v0_5 ¤

Classes:

Name Description
ArchitectureFromFileDescr
ArchitectureFromLibraryDescr
Author
AxisBase
AxisId
BadgeDescr

A custom badge

BatchAxis
BiasRisksLimitations

Known biases, risks, technical limitations, and recommendations for model use.

BinarizeAlongAxisKwargs

Keyword arguments for [BinarizeDescr][]

BinarizeDescr

Binarize the tensor with a fixed threshold.

BinarizeKwargs

Keyword arguments for [BinarizeDescr][]

BioimageioConfig
CallableFromDepencency
ChannelAxis
CiteEntry

A citation that should be referenced in work using this resource.

ClipDescr

Set tensor values below min to min and above max to max.

ClipKwargs

Keyword arguments for [ClipDescr][]

Config
DataDependentSize
DatasetDescr

A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage analysis.

DatasetId
Datetime

Timestamp in ISO 8601 format

DeprecatedLicenseId
Doi

A digital object identifier, see https://www.doi.org/

EnsureDtypeDescr

Cast the tensor data type to EnsureDtypeKwargs.dtype (if not matching).

EnsureDtypeKwargs

Keyword arguments for [EnsureDtypeDescr][]

EnvironmentalImpact

Environmental considerations for model training and deployment.

Evaluation
FileDescr

A file description

FixedZeroMeanUnitVarianceAlongAxisKwargs

Keyword arguments for [FixedZeroMeanUnitVarianceDescr][]

FixedZeroMeanUnitVarianceDescr

Subtract a given mean and divide by the standard deviation.

FixedZeroMeanUnitVarianceKwargs

Keyword arguments for [FixedZeroMeanUnitVarianceDescr][]

HttpUrl

A URL with the HTTP or HTTPS scheme.

Identifier
IndexAxisBase
IndexInputAxis
IndexOutputAxis
InputTensorDescr
IntervalOrRatioDataDescr
KerasHdf5WeightsDescr
KerasV3WeightsDescr
LicenseId
LinkedDataset

Reference to a bioimage.io dataset.

LinkedModel

Reference to a bioimage.io model.

LinkedResource

Reference to a bioimage.io resource.

Maintainer
ModelDescr

Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.

ModelId
NominalOrOrdinalDataDescr
OnnxWeightsDescr
OrcidId

An ORCID identifier, see https://orcid.org/

OutputTensorDescr
ParameterizedSize

Describes a range of valid tensor axis sizes as size = min + n*step.

PytorchStateDictWeightsDescr
RelativeFilePath

A path relative to the rdf.yaml file (also if the RDF source is a URL).

ReproducibilityTolerance

Describes what small numerical differences -- if any -- may be tolerated

ResourceId
RunMode
ScaleLinearAlongAxisKwargs

Keyword arguments for [ScaleLinearDescr][]

ScaleLinearDescr

Fixed linear scaling.

ScaleLinearKwargs

Keyword arguments for [ScaleLinearDescr][]

ScaleMeanVarianceDescr

Scale a tensor's data distribution to match another tensor's mean/std.

ScaleMeanVarianceKwargs

Keyword arguments for [ScaleMeanVarianceDescr][]

ScaleRangeDescr

Scale with percentiles.

ScaleRangeKwargs

Keyword arguments for [ScaleRangeDescr][]

Sha256

A SHA-256 hash value

SiUnit

An SI unit

SigmoidDescr

The logistic sigmoid function, a.k.a. expit function.

SizeReference

A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.

SoftmaxDescr

The softmax function.

SoftmaxKwargs

Keyword arguments for [SoftmaxDescr][]

SpaceAxisBase
SpaceInputAxis
SpaceOutputAxis
SpaceOutputAxisWithHalo
StardistPostprocessingDescr

Stardist postprocessing including non-maximum suppression and converting polygon representations to instance labels

StardistPostprocessingKwargs2D
StardistPostprocessingKwargs3D
TensorDescrBase
TensorId
TensorflowJsWeightsDescr
TensorflowSavedModelBundleWeightsDescr
TimeAxisBase
TimeInputAxis
TimeOutputAxis
TimeOutputAxisWithHalo
TorchscriptWeightsDescr
TrainingDetails
Uploader
Version

Wraps a packaging.version.Version instance for validation in pydantic models.

WeightsDescr
WeightsEntryDescrBase
WithHalo
ZeroMeanUnitVarianceDescr

Subtract the mean and divide by the standard deviation.

ZeroMeanUnitVarianceKwargs

Keyword arguments for [ZeroMeanUnitVarianceDescr][]

Functions:

Name Description
convert_axes
generate_covers
validate_tensors

Attributes:

Name Type Description
ANY_AXIS_TYPES

intended for isinstance comparisons in py<3.10

AnyAxis
AxisType
BATCH_AXIS_ID
BioimageioYamlContent
FileDescr_dependencies
FileDescr_external_data
INPUT_AXIS_TYPES

intended for isinstance comparisons in py<3.10

IO_AxisT
InputAxis
IntervalOrRatioDType
KnownRunMode
NominalOrOrdinalDType
NonBatchAxisId
NotEmpty
OUTPUT_AXIS_TYPES

intended for isinstance comparisons in py<3.10

OutputAxis
ParameterizedSize_N

Annotates an integer to calculate a concrete axis size from a ParameterizedSize.

PostprocessingDescr
PostprocessingId
PreprocessingDescr
PreprocessingId
SAME_AS_TYPE
SpaceUnit

Space unit compatible with the OME-Zarr axes specification 0.5

SpecificWeightsDescr
TVs
TensorDataDescr
TensorDescr
TimeUnit

Time unit compatible with the OME-Zarr axes specification 0.5

VALID_COVER_IMAGE_EXTENSIONS
WeightsFormat

ANY_AXIS_TYPES module-attribute ¤

intended for isinstance comparisons in py<3.10

AnyAxis module-attribute ¤

AnyAxis = Union[InputAxis, OutputAxis]

AxisType module-attribute ¤

AxisType = Literal[
    "batch", "channel", "index", "time", "space"
]

BATCH_AXIS_ID module-attribute ¤

BATCH_AXIS_ID = AxisId('batch')

BioimageioYamlContent module-attribute ¤

BioimageioYamlContent = Dict[str, YamlValue]

FileDescr_dependencies module-attribute ¤

FileDescr_dependencies = Annotated[
    FileDescr_,
    WithSuffix((".yaml", ".yml"), case_sensitive=True),
    Field(examples=[dict(source="environment.yaml")]),
]

FileDescr_external_data module-attribute ¤

FileDescr_external_data = Annotated[
    FileDescr_,
    WithSuffix(".data", case_sensitive=True),
    Field(examples=[dict(source="weights.onnx.data")]),
]

INPUT_AXIS_TYPES module-attribute ¤

intended for isinstance comparisons in py<3.10

IO_AxisT module-attribute ¤

IO_AxisT = TypeVar('IO_AxisT', InputAxis, OutputAxis)

InputAxis module-attribute ¤

InputAxis = Annotated[
    _InputAxisUnion, Discriminator("type")
]

IntervalOrRatioDType module-attribute ¤

IntervalOrRatioDType = Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
]
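Since `IntervalOrRatioDType` is a plain `typing.Literal`, its allowed values can be recovered at runtime with `typing.get_args`. A minimal sketch (the alias is re-declared here so the snippet stands alone; `is_interval_or_ratio_dtype` is a hypothetical helper, not part of the library):

```python
from typing import Literal, get_args

# Re-declaration of the IntervalOrRatioDType alias shown above.
IntervalOrRatioDType = Literal[
    "float32", "float64",
    "uint8", "int8", "uint16", "int16",
    "uint32", "int32", "uint64", "int64",
]

def is_interval_or_ratio_dtype(name: str) -> bool:
    """Check a dtype string against the Literal's allowed values."""
    return name in get_args(IntervalOrRatioDType)
```

Note that `NominalOrOrdinalDType` below additionally allows "bool".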

KnownRunMode module-attribute ¤

KnownRunMode = Literal['deepimagej']

NominalOrOrdinalDType module-attribute ¤

NominalOrOrdinalDType = Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
    "bool",
]

NonBatchAxisId module-attribute ¤

NonBatchAxisId = Annotated[AxisId, Predicate(_is_not_batch)]

NotEmpty module-attribute ¤

NotEmpty = Annotated[S, annotated_types.MinLen(1)]

OUTPUT_AXIS_TYPES module-attribute ¤

intended for isinstance comparisons in py<3.10

OutputAxis module-attribute ¤

OutputAxis = Annotated[
    _OutputAxisUnion, Discriminator("type")
]

ParameterizedSize_N module-attribute ¤

ParameterizedSize_N = int

Annotates an integer to calculate a concrete axis size from a ParameterizedSize.
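The relation `size = min + n*step` from the `ParameterizedSize` description can be sketched as follows (`concrete_axis_size` is a hypothetical helper shown for illustration):

```python
def concrete_axis_size(min_size: int, step: int, n: int) -> int:
    """Resolve a parameterized axis size to a concrete extent: size = min + n*step."""
    if n < 0:
        raise ValueError("n must be non-negative")
    return min_size + n * step
```

For example, with `min=64` and `step=16`, `n=2` yields an axis size of 96.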

PostprocessingId module-attribute ¤

PostprocessingId = Literal[
    "binarize",
    "clip",
    "ensure_dtype",
    "fixed_zero_mean_unit_variance",
    "scale_linear",
    "scale_mean_variance",
    "scale_range",
    "sigmoid",
    "softmax",
    "zero_mean_unit_variance",
]

PreprocessingId module-attribute ¤

PreprocessingId = Literal[
    "binarize",
    "clip",
    "ensure_dtype",
    "fixed_zero_mean_unit_variance",
    "scale_linear",
    "scale_range",
    "sigmoid",
    "softmax",
]

SAME_AS_TYPE module-attribute ¤

SAME_AS_TYPE = '<same as type>'

SpaceUnit module-attribute ¤

SpaceUnit = Literal[
    "attometer",
    "angstrom",
    "centimeter",
    "decimeter",
    "exameter",
    "femtometer",
    "foot",
    "gigameter",
    "hectometer",
    "inch",
    "kilometer",
    "megameter",
    "meter",
    "micrometer",
    "mile",
    "millimeter",
    "nanometer",
    "parsec",
    "petameter",
    "picometer",
    "terameter",
    "yard",
    "yoctometer",
    "yottameter",
    "zeptometer",
    "zettameter",
]

Space unit compatible with the OME-Zarr axes specification 0.5

TVs module-attribute ¤

TVs = Union[
    NotEmpty[List[int]],
    NotEmpty[List[float]],
    NotEmpty[List[bool]],
    NotEmpty[List[str]],
]

TensorDataDescr module-attribute ¤

TensorDescr module-attribute ¤

TensorDescr = Union[InputTensorDescr, OutputTensorDescr]

TimeUnit module-attribute ¤

TimeUnit = Literal[
    "attosecond",
    "centisecond",
    "day",
    "decisecond",
    "exasecond",
    "femtosecond",
    "gigasecond",
    "hectosecond",
    "hour",
    "kilosecond",
    "megasecond",
    "microsecond",
    "millisecond",
    "minute",
    "nanosecond",
    "petasecond",
    "picosecond",
    "second",
    "terasecond",
    "yoctosecond",
    "yottasecond",
    "zeptosecond",
    "zettasecond",
]

Time unit compatible with the OME-Zarr axes specification 0.5

VALID_COVER_IMAGE_EXTENSIONS module-attribute ¤

VALID_COVER_IMAGE_EXTENSIONS = (
    ".gif",
    ".jpeg",
    ".jpg",
    ".png",
    ".svg",
)
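A cover image file name can be checked against these extensions with a small helper (`has_valid_cover_extension` is hypothetical; the tuple is repeated so the snippet is self-contained):

```python
from pathlib import Path

VALID_COVER_IMAGE_EXTENSIONS = (".gif", ".jpeg", ".jpg", ".png", ".svg")

def has_valid_cover_extension(name: str) -> bool:
    """True if the file name ends in one of the accepted cover image extensions."""
    return Path(name).suffix.lower() in VALID_COVER_IMAGE_EXTENSIONS
```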

WeightsFormat module-attribute ¤

WeightsFormat = Literal[
    "keras_hdf5",
    "keras_v3",
    "onnx",
    "pytorch_state_dict",
    "tensorflow_js",
    "tensorflow_saved_model_bundle",
    "torchscript",
]

ArchitectureFromFileDescr pydantic-model ¤

Bases: _ArchitectureCallableDescr, FileDescr

Show JSON schema:
{
  "$defs": {
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Architecture source file",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "callable": {
      "description": "Identifier of the callable that returns a torch.nn.Module instance.",
      "examples": [
        "MyNetworkClass",
        "get_my_model"
      ],
      "minLength": 1,
      "title": "Identifier",
      "type": "string"
    },
    "kwargs": {
      "additionalProperties": {
        "$ref": "#/$defs/YamlValue"
      },
      "description": "key word arguments for the `callable`",
      "title": "Kwargs",
      "type": "object"
    }
  },
  "required": [
    "source",
    "callable"
  ],
  "title": "model.v0_5.ArchitectureFromFileDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256

callable pydantic-field ¤

callable: Annotated[
    Identifier,
    Field(examples=["MyNetworkClass", "get_my_model"]),
]

Identifier of the callable that returns a torch.nn.Module instance.

kwargs pydantic-field ¤

kwargs: Dict[str, YamlValue]

Keyword arguments for the callable

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Architecture source file

suffix property ¤

suffix: str

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
| --- | --- |
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| Self | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )

ArchitectureFromLibraryDescr pydantic-model ¤

Bases: _ArchitectureCallableDescr

Show JSON schema:
{
  "$defs": {
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": false,
  "properties": {
    "callable": {
      "description": "Identifier of the callable that returns a torch.nn.Module instance.",
      "examples": [
        "MyNetworkClass",
        "get_my_model"
      ],
      "minLength": 1,
      "title": "Identifier",
      "type": "string"
    },
    "kwargs": {
      "additionalProperties": {
        "$ref": "#/$defs/YamlValue"
      },
      "description": "key word arguments for the `callable`",
      "title": "Kwargs",
      "type": "object"
    },
    "import_from": {
      "description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
      "title": "Import From",
      "type": "string"
    }
  },
  "required": [
    "callable",
    "import_from"
  ],
  "title": "model.v0_5.ArchitectureFromLibraryDescr",
  "type": "object"
}

Fields:

callable pydantic-field ¤

callable: Annotated[
    Identifier,
    Field(examples=["MyNetworkClass", "get_my_model"]),
]

Identifier of the callable that returns a torch.nn.Module instance.

import_from pydantic-field ¤

import_from: str

Where to import the callable from, i.e. from <import_from> import <callable>
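The `from <import_from> import <callable>` semantics correspond to a dynamic import, roughly as sketched below with the standard `importlib` module (`load_callable` is a hypothetical name, not the library's own loader):

```python
import importlib

def load_callable(import_from: str, callable_name: str):
    """Resolve `from <import_from> import <callable>` at runtime."""
    module = importlib.import_module(import_from)
    return getattr(module, callable_name)
```

For instance, `load_callable("math", "sqrt")` returns the `math.sqrt` function.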

kwargs pydantic-field ¤

kwargs: Dict[str, YamlValue]

Keyword arguments for the callable

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
| --- | --- |
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| Self | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Author pydantic-model ¤

Bases: _Author_v0_2

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "affiliation": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Affiliation",
      "title": "Affiliation"
    },
    "email": {
      "anyOf": [
        {
          "format": "email",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Email",
      "title": "Email"
    },
    "orcid": {
      "anyOf": [
        {
          "description": "An ORCID identifier, see https://orcid.org/",
          "title": "OrcidId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
      "examples": [
        "0000-0001-2345-6789"
      ],
      "title": "Orcid"
    },
    "name": {
      "title": "Name",
      "type": "string"
    },
    "github_user": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Github User"
    }
  },
  "required": [
    "name"
  ],
  "title": "generic.v0_3.Author",
  "type": "object"
}

Fields:

Validators:

affiliation pydantic-field ¤

affiliation: Optional[str] = None

Affiliation

email pydantic-field ¤

email: Optional[EmailStr] = None

Email

github_user pydantic-field ¤

github_user: Optional[str] = None

name pydantic-field ¤

name: Annotated[str, Predicate(_has_no_slash)]

orcid pydantic-field ¤

orcid: Annotated[
    Optional[OrcidId],
    Field(examples=["0000-0001-2345-6789"]),
] = None

An ORCID iD in hyphenated groups of 4 digits (valid as per ISO 7064 11,2).
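The ISO 7064 11,2 (MOD 11-2) check referenced here can be reproduced in a few lines (an illustrative sketch, not the validator the library uses; `orcid_checksum_ok` is a hypothetical name):

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Verify the ISO 7064 MOD 11-2 check digit of an ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:  # the first 15 characters must be digits
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)  # 10 is encoded as 'X'
    return digits[-1] == expected
```

The example ORCID "0000-0001-2345-6789" passes this check.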

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
| --- | --- |
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| Self | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

AxisBase pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "description": "An axis id unique across all axes of one tensor.",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.AxisBase",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

id: AxisId

An axis id unique across all axes of one tensor.
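Per the JSON schema above, an axis id is a 1-16 character string, and as a `LowerCaseIdentifier` it is lowercase. A rough standalone check might look like this (`looks_like_axis_id` is hypothetical; the real validation is done by pydantic, and the exact identifier rules are an assumption here):

```python
def looks_like_axis_id(value: str) -> bool:
    """Rough check mirroring the schema: 1-16 chars, lowercase identifier."""
    return 1 <= len(value) <= 16 and value == value.lower() and value.isidentifier()
```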

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
| --- | --- |
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| Self | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
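The context handling above (fall back to the active context, convert a plain mapping into a `ValidationContext`, then enter it for the duration of validation) can be sketched in plain Python. `Ctx` and `current()` below are hypothetical stand-ins for the library's `ValidationContext` and `get_validation_context()`, and `perform_io_checks` is used only as an illustrative setting name:

```python
import contextvars
from typing import Any, Dict

# Active validation settings, installed for the duration of a `with` block so
# that validation behaves the same whether triggered via __init__ or
# model_validate. Minimal sketch, not the library implementation.
_active: contextvars.ContextVar[Dict[str, Any]] = contextvars.ContextVar(
    "validation_context", default={}
)


class Ctx:
    """Hypothetical stand-in for ValidationContext (a context manager)."""

    def __init__(self, **settings: Any) -> None:
        self._settings = settings

    def __enter__(self) -> "Ctx":
        self._token = _active.set(self._settings)
        return self

    def __exit__(self, *exc: Any) -> None:
        _active.reset(self._token)


def current() -> Dict[str, Any]:
    """Stand-in for get_validation_context(): the currently active settings."""
    return _active.get()
```

Passing `context={"perform_io_checks": False}` to `model_validate` is then equivalent to wrapping the call in `with ValidationContext(perform_io_checks=False): ...`.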

AxisId ¤

Bases: LowerCaseIdentifier


Class hierarchy: ValidatedString → LowerCaseIdentifier → AxisId

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        LowerCaseIdentifierAnno,
        MaxLen(16),
        AfterValidator(_normalize_axis_id),
    ]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
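The `__new__` above implements the validated-string pattern: validation runs once at construction, and the result is an ordinary immutable `str` subclass that is guaranteed valid. A minimal self-contained sketch, where the lowercasing and length check mimic `AxisId`'s root model (`MaxLen(16)` plus normalization) without reproducing `_normalize_axis_id` itself:

```python
class MiniAxisId(str):
    """Hypothetical sketch of AxisId: a str subclass validated in __new__."""

    MAX_LEN = 16  # mirrors the MaxLen(16) constraint above

    @staticmethod
    def _validate(value: object) -> str:
        # Stand-in for the pydantic root model + AfterValidator normalization.
        s = str(value).lower()
        if not 1 <= len(s) <= MiniAxisId.MAX_LEN:
            raise ValueError(f"invalid axis id: {value!r}")
        return s

    def __new__(cls, value: object) -> "MiniAxisId":
        return super().__new__(cls, cls._validate(value))
```

Because the instance *is* the validated string, it can be used anywhere a plain `str` is expected, with no re-validation needed.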

BadgeDescr pydantic-model ¤

Bases: Node

A custom badge

Show JSON schema:
{
  "$defs": {
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    }
  },
  "additionalProperties": false,
  "description": "A custom badge",
  "properties": {
    "label": {
      "description": "badge label to display on hover",
      "examples": [
        "Open in Colab"
      ],
      "title": "Label",
      "type": "string"
    },
    "icon": {
      "anyOf": [
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "badge icon (included in bioimage.io package if not a URL)",
      "examples": [
        "https://colab.research.google.com/assets/colab-badge.svg"
      ],
      "title": "Icon"
    },
    "url": {
      "description": "target URL",
      "examples": [
        "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
      ],
      "format": "uri",
      "maxLength": 2083,
      "minLength": 1,
      "title": "HttpUrl",
      "type": "string"
    }
  },
  "required": [
    "label",
    "url"
  ],
  "title": "generic.v0_2.BadgeDescr",
  "type": "object"
}

Fields:

  • label (Annotated[str, Field(examples=['Open in Colab'])])
  • icon (Annotated[Optional[Union[Annotated[Union[FilePath, RelativeFilePath], AfterValidator(wo_special_file_name), include_in_package], Union[HttpUrl, pydantic.HttpUrl]]], Field(examples=['https://colab.research.google.com/assets/colab-badge.svg'])])
  • url (Annotated[HttpUrl, Field(examples=['https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb'])])

icon pydantic-field ¤

icon: Annotated[
    Optional[
        Union[
            Annotated[
                Union[FilePath, RelativeFilePath],
                AfterValidator(wo_special_file_name),
                include_in_package,
            ],
            Union[HttpUrl, pydantic.HttpUrl],
        ]
    ],
    Field(
        examples=[
            "https://colab.research.google.com/assets/colab-badge.svg"
        ]
    ),
] = None

badge icon (included in bioimage.io package if not a URL)

label pydantic-field ¤

label: Annotated[str, Field(examples=["Open in Colab"])]

badge label to display on hover

url pydantic-field ¤

url: Annotated[
    HttpUrl,
    Field(
        examples=[
            "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
        ]
    ),
]

target URL
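Put together, a badge entry in an `rdf.yaml` (under the RDF's `badges` field) might look like the following; the values are taken from the field examples above:

```yaml
badges:
  - label: Open in Colab
    icon: https://colab.research.google.com/assets/colab-badge.svg
    url: https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb
```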

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

BatchAxis pydantic-model ¤

Bases: AxisBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "batch",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "batch",
      "title": "Type",
      "type": "string"
    },
    "size": {
      "anyOf": [
        {
          "const": 1,
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
      "title": "Size"
    }
  },
  "required": [
    "type"
  ],
  "title": "model.v0_5.BatchAxis",
  "type": "object"
}

Fields:

  • description (Annotated[str, MaxLen(128)])
  • type (Literal['batch'])
  • id (Annotated[AxisId, Predicate(_is_batch)])
  • size (Optional[Literal[1]])

concatenable property ¤

concatenable

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

id: Annotated[AxisId, Predicate(_is_batch)] = BATCH_AXIS_ID

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['batch'] = 'batch'

scale property ¤

scale

size pydantic-field ¤

size: Optional[Literal[1]] = None

The batch size may be fixed to 1, otherwise (the default) it may be chosen arbitrarily depending on available memory

type pydantic-field ¤

type: Literal['batch'] = 'batch'

unit property ¤

unit
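In an `rdf.yaml`, a batch axis typically needs only its `type`: the `id` defaults to `batch`, and `size` is normally left unset so the batch size can be chosen at runtime. A sketch of an axis list:

```yaml
axes:
  - type: batch   # id defaults to "batch"; size chosen at runtime
```

To fix the batch size instead, add `size: 1` (the only permitted fixed value).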

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

BiasRisksLimitations pydantic-model ¤

Bases: Node

Known biases, risks, technical limitations, and recommendations for model use.

Show JSON schema:
{
  "additionalProperties": true,
  "description": "Known biases, risks, technical limitations, and recommendations for model use.",
  "properties": {
    "known_biases": {
      "default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
      "description": "Biases in training data or model behavior.",
      "title": "Known Biases",
      "type": "string"
    },
    "risks": {
      "default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
      "description": "Potential risks in the context of bioimage analysis.",
      "title": "Risks",
      "type": "string"
    },
    "limitations": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Technical limitations and failure modes.",
      "title": "Limitations"
    },
    "recommendations": {
      "default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
      "description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
      "title": "Recommendations",
      "type": "string"
    }
  },
  "title": "model.v0_5.BiasRisksLimitations",
  "type": "object"
}

Fields:

  • known_biases (str)
  • risks (str)
  • limitations (Optional[str])
  • recommendations (str)

known_biases pydantic-field ¤

known_biases: str

Biases in training data or model behavior.

limitations pydantic-field ¤

limitations: Optional[str] = None

Technical limitations and failure modes.

recommendations pydantic-field ¤

recommendations: str = "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model."

Mitigation strategies regarding known_biases, risks, and limitations, as well as applicable best practices.

Consider:

  • How to use a validation dataset?
  • How to manually validate?
  • Feasibility of domain adaptation for different experimental setups?

risks pydantic-field ¤

risks: str

Potential risks in the context of bioimage analysis.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

format_md ¤

format_md() -> str
Source code in src/bioimageio/spec/model/v0_5.py
    def format_md(self) -> str:
        if self.limitations is None:
            limitations_header = ""
        else:
            limitations_header = "## Limitations\n\n"

        return f"""# Bias, Risks, and Limitations

{self.known_biases}

{self.risks}

{limitations_header}{self.limitations or ""}

## Recommendations

{self.recommendations}

"""

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

BinarizeAlongAxisKwargs pydantic-model ¤

Bases: KwargsNode

key word arguments for BinarizeDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [BinarizeDescr][]",
  "properties": {
    "threshold": {
      "description": "The fixed threshold values along `axis`",
      "items": {
        "type": "number"
      },
      "minItems": 1,
      "title": "Threshold",
      "type": "array"
    },
    "axis": {
      "description": "The `threshold` axis",
      "examples": [
        "channel"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    }
  },
  "required": [
    "threshold",
    "axis"
  ],
  "title": "model.v0_5.BinarizeAlongAxisKwargs",
  "type": "object"
}

Fields:

  • threshold (NotEmpty[List[float]])
  • axis (Annotated[NonBatchAxisId, Field(examples=['channel'])])

axis pydantic-field ¤

axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]

The threshold axis

threshold pydantic-field ¤

threshold: NotEmpty[List[float]]

The fixed threshold values along axis
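Numerically, these kwargs supply one threshold per entry along `axis`. A plain-Python sketch of the resulting binarization (per the spec, values above the threshold become one and values below become zero; ties are treated as below here, which the spec leaves open):

```python
from typing import List


def binarize_along_axis(channels: List[List[float]],
                        thresholds: List[float]) -> List[List[float]]:
    """Apply one threshold per channel (the `threshold` field). Sketch only."""
    assert len(channels) == len(thresholds), "one threshold per axis entry"
    return [
        [1.0 if value > t else 0.0 for value in channel]
        for channel, t in zip(channels, thresholds)
    ]
```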

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

BinarizeDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Binarize the tensor with a fixed threshold.

Values above BinarizeKwargs.threshold/BinarizeAlongAxisKwargs.threshold will be set to one, values below the threshold to zero.

Examples:

  • in YAML
    postprocessing:
      - id: binarize
        kwargs:
          axis: 'channel'
          threshold: [0.25, 0.5, 0.75]
    
  • in Python:

    >>> postprocessing = [BinarizeDescr(
    ...   kwargs=BinarizeAlongAxisKwargs(
    ...       axis=AxisId('channel'),
    ...       threshold=[0.25, 0.5, 0.75],
    ...   )
    ... )]
    
Show JSON schema:
{
  "$defs": {
    "BinarizeAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold values along `axis`",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Threshold",
          "type": "array"
        },
        "axis": {
          "description": "The `threshold` axis",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "threshold",
        "axis"
      ],
      "title": "model.v0_5.BinarizeAlongAxisKwargs",
      "type": "object"
    },
    "BinarizeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold",
          "title": "Threshold",
          "type": "number"
        }
      },
      "required": [
        "threshold"
      ],
      "title": "model.v0_5.BinarizeKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: binarize\n        kwargs:\n          axis: 'channel'\n          threshold: [0.25, 0.5, 0.75]\n    ```\n- in Python:\n\n    >>> postprocessing = [BinarizeDescr(\n    ...   kwargs=BinarizeAlongAxisKwargs(\n    ...       axis=AxisId('channel'),\n    ...       threshold=[0.25, 0.5, 0.75],\n    ...   )\n    ... )]",
  "properties": {
    "id": {
      "const": "binarize",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "anyOf": [
        {
          "$ref": "#/$defs/BinarizeKwargs"
        },
        {
          "$ref": "#/$defs/BinarizeAlongAxisKwargs"
        }
      ],
      "title": "Kwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.BinarizeDescr",
  "type": "object"
}

Fields:

  • id (Literal['binarize'])
  • kwargs (Union[BinarizeKwargs, BinarizeAlongAxisKwargs])

id pydantic-field ¤

id: Literal['binarize'] = 'binarize'

implemented_id class-attribute ¤

implemented_id: Literal['binarize'] = 'binarize'

kwargs pydantic-field ¤

kwargs: Union[BinarizeKwargs, BinarizeAlongAxisKwargs]

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]], required): The object to validate.
strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
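The context handling above accepts a ready `ValidationContext`, a plain mapping, or `None`. A minimal sketch of just that normalization step (the `ValidationContext` class here is a stand-in for the library's; `perform_io_checks` is used purely as an example key):

```python
from collections.abc import Mapping
from typing import Any, Optional


class ValidationContext:
    """Stand-in for bioimageio.spec's ValidationContext (illustration only)."""

    def __init__(self, **settings: Any):
        self.settings = settings


def normalize_context(context: Optional[Any], ambient: ValidationContext) -> ValidationContext:
    if context is None:
        return ambient  # fall back to the ambient context (get_validation_context() in the source)
    if isinstance(context, Mapping):
        return ValidationContext(**context)  # promote a plain mapping
    return context  # already a ValidationContext


ctx = normalize_context({"perform_io_checks": False}, ambient=ValidationContext())
```

The source then enters the resulting context as a context manager, so `__init__` and `model_validate` validate under identical settings.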

BinarizeKwargs pydantic-model ¤

Bases: KwargsNode

keyword arguments for BinarizeDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [BinarizeDescr][]",
  "properties": {
    "threshold": {
      "description": "The fixed threshold",
      "title": "Threshold",
      "type": "number"
    }
  },
  "required": [
    "threshold"
  ],
  "title": "model.v0_5.BinarizeKwargs",
  "type": "object"
}

Fields:

threshold pydantic-field ¤

threshold: float

The fixed threshold
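For context, BinarizeDescr applies this threshold element-wise to produce a zero/one tensor. A plain-Python sketch of that semantics (the real implementation operates on whole tensors; a list stands in here, and the strict-greater boundary handling is an assumption of this sketch):

```python
from typing import List


def binarize(values: List[float], threshold: float) -> List[float]:
    """Fixed-threshold binarization: values above `threshold` -> 1.0, others -> 0.0."""
    return [1.0 if v > threshold else 0.0 for v in values]


out = binarize([0.1, 0.5, 0.9], threshold=0.5)
```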

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default
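Together, `__contains__`, `__getitem__`, and `get` give kwargs nodes read-only, dict-like access to their declared fields. A self-contained sketch mirroring the logic shown above (a plain class with a `_fields` tuple stands in for a pydantic model and its `model_fields`):

```python
from typing import Any


class KwargsLike:
    """Mirrors the dict-like access shown above (illustrative stand-in)."""

    _fields = ("threshold",)  # stand-in for model_fields

    def __init__(self, threshold: float):
        self.threshold = threshold

    def __contains__(self, item: str) -> bool:
        return item in self._fields

    def __getitem__(self, item: str) -> Any:
        if item in self._fields:
            return getattr(self, item)
        raise KeyError(item)

    def get(self, item: str, default: Any = None) -> Any:
        return self[item] if item in self else default


kw = KwargsLike(threshold=0.5)
```

Unknown keys raise `KeyError` via `__getitem__`, while `get` falls back to its default, exactly as in the source above.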

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]], required): The object to validate.
strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

BioimageioConfig pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "$defs": {
    "BiasRisksLimitations": {
      "additionalProperties": true,
      "description": "Known biases, risks, technical limitations, and recommendations for model use.",
      "properties": {
        "known_biases": {
          "default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
          "description": "Biases in training data or model behavior.",
          "title": "Known Biases",
          "type": "string"
        },
        "risks": {
          "default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
          "description": "Potential risks in the context of bioimage analysis.",
          "title": "Risks",
          "type": "string"
        },
        "limitations": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Technical limitations and failure modes.",
          "title": "Limitations"
        },
        "recommendations": {
          "default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
          "description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
          "title": "Recommendations",
          "type": "string"
        }
      },
      "title": "model.v0_5.BiasRisksLimitations",
      "type": "object"
    },
    "EnvironmentalImpact": {
      "additionalProperties": true,
      "description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
      "properties": {
        "hardware_type": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU/CPU specifications",
          "title": "Hardware Type"
        },
        "hours_used": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total compute hours",
          "title": "Hours Used"
        },
        "cloud_provider": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "If applicable",
          "title": "Cloud Provider"
        },
        "compute_region": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Geographic location",
          "title": "Compute Region"
        },
        "co2_emitted": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
          "title": "Co2 Emitted"
        }
      },
      "title": "model.v0_5.EnvironmentalImpact",
      "type": "object"
    },
    "Evaluation": {
      "additionalProperties": true,
      "properties": {
        "model_id": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "ModelId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Model being evaluated.",
          "title": "Model Id"
        },
        "dataset_id": {
          "description": "Dataset used for evaluation.",
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        },
        "dataset_source": {
          "description": "Source of the dataset.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        "dataset_role": {
          "description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
          "enum": [
            "train",
            "validation",
            "test",
            "independent",
            "unknown"
          ],
          "title": "Dataset Role",
          "type": "string"
        },
        "sample_count": {
          "description": "Number of evaluated samples.",
          "title": "Sample Count",
          "type": "integer"
        },
        "evaluation_factors": {
          "description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Evaluation Factors",
          "type": "array"
        },
        "evaluation_factors_long": {
          "description": "Descriptions (long form) of each evaluation factor.",
          "items": {
            "type": "string"
          },
          "title": "Evaluation Factors Long",
          "type": "array"
        },
        "metrics": {
          "description": "(Abbreviations of) metrics used for evaluation.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Metrics",
          "type": "array"
        },
        "metrics_long": {
          "description": "Description of each metric used.",
          "items": {
            "type": "string"
          },
          "title": "Metrics Long",
          "type": "array"
        },
        "results": {
          "description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
          "items": {
            "items": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "number"
                },
                {
                  "type": "integer"
                }
              ]
            },
            "type": "array"
          },
          "title": "Results",
          "type": "array"
        },
        "results_summary": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Interpretation of results for general audience.\n\nConsider:\n    - Overall model performance\n    - Comparison to existing methods\n    - Limitations and areas for improvement",
          "title": "Results Summary"
        }
      },
      "required": [
        "dataset_id",
        "dataset_source",
        "dataset_role",
        "sample_count",
        "evaluation_factors",
        "evaluation_factors_long",
        "metrics",
        "metrics_long",
        "results"
      ],
      "title": "model.v0_5.Evaluation",
      "type": "object"
    },
    "ReproducibilityTolerance": {
      "additionalProperties": true,
      "description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n    For testing we can request the respective deep learning frameworks to be as\n    reproducible as possible by setting seeds and choosing deterministic algorithms,\n    but differences in operating systems, available hardware and installed drivers\n    may still lead to numerical differences.",
      "properties": {
        "relative_tolerance": {
          "default": 0.001,
          "description": "Maximum relative tolerance of reproduced test tensor.",
          "maximum": 0.01,
          "minimum": 0,
          "title": "Relative Tolerance",
          "type": "number"
        },
        "absolute_tolerance": {
          "default": 0.001,
          "description": "Maximum absolute tolerance of reproduced test tensor.",
          "minimum": 0,
          "title": "Absolute Tolerance",
          "type": "number"
        },
        "mismatched_elements_per_million": {
          "default": 100,
          "description": "Maximum number of mismatched elements/pixels per million to tolerate.",
          "maximum": 1000,
          "minimum": 0,
          "title": "Mismatched Elements Per Million",
          "type": "integer"
        },
        "output_ids": {
          "default": [],
          "description": "Limits the output tensor IDs these reproducibility details apply to.",
          "items": {
            "maxLength": 32,
            "minLength": 1,
            "title": "TensorId",
            "type": "string"
          },
          "title": "Output Ids",
          "type": "array"
        },
        "weights_formats": {
          "default": [],
          "description": "Limits the weights formats these details apply to.",
          "items": {
            "enum": [
              "keras_hdf5",
              "keras_v3",
              "onnx",
              "pytorch_state_dict",
              "tensorflow_js",
              "tensorflow_saved_model_bundle",
              "torchscript"
            ],
            "type": "string"
          },
          "title": "Weights Formats",
          "type": "array"
        }
      },
      "title": "model.v0_5.ReproducibilityTolerance",
      "type": "object"
    },
    "TrainingDetails": {
      "additionalProperties": true,
      "properties": {
        "training_preprocessing": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
          "title": "Training Preprocessing"
        },
        "training_epochs": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Number of training epochs.",
          "title": "Training Epochs"
        },
        "training_batch_size": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Batch size used in training.",
          "title": "Training Batch Size"
        },
        "initial_learning_rate": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Initial learning rate used in training.",
          "title": "Initial Learning Rate"
        },
        "learning_rate_schedule": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Learning rate schedule used in training.",
          "title": "Learning Rate Schedule"
        },
        "loss_function": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Loss function used in training, e.g. nn.MSELoss.",
          "title": "Loss Function"
        },
        "loss_function_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `loss_function`",
          "title": "Loss Function Kwargs",
          "type": "object"
        },
        "optimizer": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "optimizer, e.g. torch.optim.Adam",
          "title": "Optimizer"
        },
        "optimizer_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `optimizer`",
          "title": "Optimizer Kwargs",
          "type": "object"
        },
        "regularization": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
          "title": "Regularization"
        },
        "training_duration": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total training duration in hours.",
          "title": "Training Duration"
        }
      },
      "title": "model.v0_5.TrainingDetails",
      "type": "object"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": true,
  "properties": {
    "reproducibility_tolerance": {
      "default": [],
      "description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
      "items": {
        "$ref": "#/$defs/ReproducibilityTolerance"
      },
      "title": "Reproducibility Tolerance",
      "type": "array"
    },
    "funded_by": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Funding agency, grant number if applicable",
      "title": "Funded By"
    },
    "architecture_type": {
      "anyOf": [
        {
          "maxLength": 32,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
      "title": "Architecture Type"
    },
    "architecture_description": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Text description of model architecture.",
      "title": "Architecture Description"
    },
    "modality": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
      "title": "Modality"
    },
    "target_structure": {
      "description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
      "items": {
        "type": "string"
      },
      "title": "Target Structure",
      "type": "array"
    },
    "task": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
      "title": "Task"
    },
    "new_version": {
      "anyOf": [
        {
          "minLength": 1,
          "title": "ModelId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A new version of this model exists with a different model id.",
      "title": "New Version"
    },
    "out_of_scope_use": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
      "title": "Out Of Scope Use"
    },
    "bias_risks_limitations": {
      "$ref": "#/$defs/BiasRisksLimitations",
      "description": "Description of known bias, risks, and technical limitations for in-scope model use."
    },
    "model_parameter_count": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Total number of model parameters.",
      "title": "Model Parameter Count"
    },
    "training": {
      "$ref": "#/$defs/TrainingDetails",
      "description": "Details on how the model was trained."
    },
    "inference_time": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
      "title": "Inference Time"
    },
    "memory_requirements_inference": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
      "title": "Memory Requirements Inference"
    },
    "memory_requirements_training": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
      "title": "Memory Requirements Training"
    },
    "evaluations": {
      "description": "Quantitative model evaluations.\n\nNote:\n    At the moment we recommend to include only a single test dataset\n    (with evaluation factors that may mark subsets of the dataset)\n    to avoid confusion and make the presentation of results cleaner.",
      "items": {
        "$ref": "#/$defs/Evaluation"
      },
      "title": "Evaluations",
      "type": "array"
    },
    "environmental_impact": {
      "$ref": "#/$defs/EnvironmentalImpact",
      "description": "Environmental considerations for model training and deployment"
    }
  },
  "title": "model.v0_5.BioimageioConfig",
  "type": "object"
}

Fields:

architecture_description pydantic-field ¤

architecture_description: Optional[str] = None

Text description of model architecture.

architecture_type pydantic-field ¤

architecture_type: Optional[Annotated[str, MaxLen(32)]] = (
    None
)

Model architecture type, e.g., 3D U-Net, ResNet, transformer

bias_risks_limitations pydantic-field ¤

bias_risks_limitations: BiasRisksLimitations

Description of known bias, risks, and technical limitations for in-scope model use.

environmental_impact pydantic-field ¤

environmental_impact: EnvironmentalImpact

Environmental considerations for model training and deployment

evaluations pydantic-field ¤

evaluations: List[Evaluation]

Quantitative model evaluations.

Note

At the moment we recommend including only a single test dataset (with evaluation factors that may mark subsets of the dataset) to avoid confusion and make the presentation of results cleaner.
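Per the Evaluation schema above, `results` is a nested list with one row per metric and one column per evaluation factor. A short sketch of how the pieces line up (all names and numbers are illustrative):

```python
metrics = ["IoU", "F1"]                      # rows of `results`
evaluation_factors = ["overall", "low SNR"]  # columns of `results`
results = [
    [0.87, 0.80],  # IoU under each evaluation factor
    [0.91, 0.85],  # F1 under each evaluation factor
]

# results[i][j] is metrics[i] evaluated under evaluation_factors[j]
table = {
    metric: dict(zip(evaluation_factors, row))
    for metric, row in zip(metrics, results)
}
```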

funded_by pydantic-field ¤

funded_by: Optional[str] = None

Funding agency, grant number if applicable

inference_time pydantic-field ¤

inference_time: Optional[str] = None

Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.

memory_requirements_inference pydantic-field ¤

memory_requirements_inference: Optional[str] = None

GPU memory needed for inference. Multiple examples with different image size can be given.

memory_requirements_training pydantic-field ¤

memory_requirements_training: Optional[str] = None

GPU memory needed for training. Multiple examples with different image/batch sizes can be given.

modality pydantic-field ¤

modality: Optional[str] = None

Input modality, e.g., fluorescence microscopy, electron microscopy

model_parameter_count pydantic-field ¤

model_parameter_count: Optional[int] = None

Total number of model parameters.

new_version pydantic-field ¤

new_version: Optional[ModelId] = None

A new version of this model exists with a different model id.

out_of_scope_use pydantic-field ¤

out_of_scope_use: Optional[str] = None

Describe how the model may be misused in bioimage analysis contexts and what users should not do with the model.

reproducibility_tolerance pydantic-field ¤

reproducibility_tolerance: Sequence[
    ReproducibilityTolerance
] = ()

Tolerances to allow when reproducing the model's test outputs from the model's test inputs. Only the first entry matching tensor id and weights format is considered.
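As stated in the ReproducibilityTolerance schema above, an element *output* is mismatched if abs(*output* - *test_tensor*) > absolute_tolerance + relative_tolerance * abs(*test_tensor*), and up to `mismatched_elements_per_million` such elements are tolerated. A pure-Python sketch of that check (the library compares actual tensors via numpy; flat sequences stand in here):

```python
from typing import Sequence


def mismatches_per_million(output: Sequence[float], test: Sequence[float],
                           rtol: float = 1e-3, atol: float = 1e-3) -> float:
    """Count elements violating abs(out - ref) > atol + rtol * abs(ref), scaled per million."""
    assert len(output) == len(test)
    bad = sum(1 for o, r in zip(output, test) if abs(o - r) > atol + rtol * abs(r))
    return bad * 1_000_000 / len(test)


# only the third element (3.5 vs 3.0) exceeds the default tolerances
mpm = mismatches_per_million([1.0, 2.0, 3.5], [1.0, 2.001, 3.0])
```

The defaults (`rtol=1e-3`, `atol=1e-3`) match the schema defaults; a reproduction would pass if `mpm` stays at or below the configured `mismatched_elements_per_million` (default 100).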

target_structure pydantic-field ¤

target_structure: List[str]

Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells

task pydantic-field ¤

task: Optional[str] = None

Bioimage-specific task type, e.g., segmentation, classification, detection, denoising

training pydantic-field ¤

training: TrainingDetails

Details on how the model was trained.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]], required): The object to validate.
strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
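`model_validate` accepts `context` either as a `ValidationContext` or as a plain mapping, which it coerces before entering the context manager. The coercion pattern can be sketched in isolation with stand-in types (the `Context` class and its `perform_io_checks` field are illustrative stand-ins, not the real bioimageio types):

```python
import collections.abc
from dataclasses import dataclass


@dataclass
class Context:
    """Stand-in for ValidationContext: constructed from keyword arguments."""

    perform_io_checks: bool = True


def coerce_context(context):
    # None -> a fresh default context; Mapping -> keyword-expanded; else pass through.
    if context is None:
        return Context()
    if isinstance(context, collections.abc.Mapping):
        return Context(**context)
    return context
```

This mirrors why passing `{"perform_io_checks": False}` and passing a prebuilt context object behave the same in `model_validate`.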

CallableFromDepencency ¤

Bases: ValidatedStringWithInnerNode[CallableFromDepencencyNode]


Class inheritance: ValidatedString → ValidatedStringWithInnerNode → CallableFromDepencency

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
callable_name

The callable Python identifier implemented in module module_name.

module_name

The Python module that implements callable_name.

root_model Type[RootModel[Any]]

the pydantic root model to validate the string

callable_name property ¤

callable_name

The callable Python identifier implemented in module module_name.

module_name property ¤

module_name

The Python module that implements callable_name.

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        str,
        StringConstraints(
            strip_whitespace=True, pattern="^.+\\..+$"
        ),
    ]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
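The `root_model` constrains the string to the pattern `^.+\..+$`, i.e. a dotted path with at least one dot separating `module_name` from `callable_name`. A minimal stdlib sketch of that split (a hypothetical helper, not part of bioimageio.spec):

```python
import re

# Same constraint as root_model: strip whitespace, require "<module>.<callable>"
_PATTERN = re.compile(r"^.+\..+$")


def split_dotted_path(value: str) -> tuple:
    """Split a validated dotted path into (module_name, callable_name)."""
    value = value.strip()
    if not _PATTERN.match(value):
        raise ValueError(f"not a dotted path: {value!r}")
    # rpartition keeps everything up to the last dot as the module path
    module_name, _, callable_name = value.rpartition(".")
    return module_name, callable_name
```

For example, `split_dotted_path("my_pkg.models.unet")` yields `("my_pkg.models", "unet")`.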

ChannelAxis pydantic-model ¤

Bases: AxisBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "channel",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "channel",
      "title": "Type",
      "type": "string"
    },
    "channel_names": {
      "items": {
        "minLength": 1,
        "title": "Identifier",
        "type": "string"
      },
      "minItems": 1,
      "title": "Channel Names",
      "type": "array"
    }
  },
  "required": [
    "type",
    "channel_names"
  ],
  "title": "model.v0_5.ChannelAxis",
  "type": "object"
}

Fields:

channel_names pydantic-field ¤

channel_names: NotEmpty[List[Identifier]]

concatenable property ¤

concatenable

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['channel'] = 'channel'

scale property ¤

scale: float

size property ¤

size: int

type pydantic-field ¤

type: Literal['channel'] = 'channel'

unit property ¤

unit
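In raw RDF form, a `ChannelAxis` is a mapping matching the JSON schema above; its derived `size` is the number of channel names. An illustrative raw-dict form (the channel names are made-up example values):

```python
# Hypothetical raw-dict form of a ChannelAxis, matching the JSON schema above.
channel_axis = {
    "type": "channel",                  # const "channel" (required)
    "id": "channel",                    # default axis id, 1..16 characters
    "channel_names": ["DAPI", "GFP"],   # required, non-empty list of identifiers
}

# The derived `size` property equals the number of channels.
size = len(channel_axis["channel_names"])

# Both schema-required keys must be present.
required = {"type", "channel_names"}
assert required <= channel_axis.keys()
```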

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)
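`__pydantic_init_subclass__` scans class attributes with the `implemented_` prefix and records them as values that must be set explicitly on the matching field (e.g. `implemented_id = "clip"` pins the `id` field). The scanning idea can be sketched without pydantic (the `Base`/`ClipLike` classes below are stand-ins):

```python
from types import MappingProxyType


class Base:
    # Stand-in for pydantic's per-class field registry.
    model_fields = {"id": None, "kwargs": None}

    @classmethod
    def collect_explicit_fields(cls):
        explicit = {}
        for attr in dir(cls):
            if attr.startswith("implemented_"):
                # removeprefix only strips the leading prefix (Python >= 3.9),
                # avoiding the pitfall of str.replace matching later occurrences.
                field_name = attr.removeprefix("implemented_")
                if field_name in cls.model_fields:
                    explicit[field_name] = getattr(cls, attr)
        return MappingProxyType(explicit)


class ClipLike(Base):
    implemented_id = "clip"
```

`MappingProxyType` makes the collected mapping read-only, matching the original's `_fields_to_set_explicitly`.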

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

CiteEntry pydantic-model ¤

Bases: Node

A citation that should be referenced in work using this resource.

Show JSON schema:
{
  "additionalProperties": false,
  "description": "A citation that should be referenced in work using this resource.",
  "properties": {
    "text": {
      "description": "free text description",
      "title": "Text",
      "type": "string"
    },
    "doi": {
      "anyOf": [
        {
          "description": "A digital object identifier, see https://www.doi.org/",
          "pattern": "^10\\.[0-9]{4}.+$",
          "title": "Doi",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A digital object identifier (DOI) is the preferred citation reference.\nSee https://www.doi.org/ for details.\nNote:\n    Either **doi** or **url** have to be specified.",
      "title": "Doi"
    },
    "url": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n    Either **doi** or **url** have to be specified.",
      "title": "Url"
    }
  },
  "required": [
    "text"
  ],
  "title": "generic.v0_3.CiteEntry",
  "type": "object"
}

Fields:

Validators:

  • _check_doi_or_url

doi pydantic-field ¤

doi: Optional[Doi] = None

A digital object identifier (DOI) is the preferred citation reference. See https://www.doi.org/ for details. Note: Either doi or url have to be specified.

text pydantic-field ¤

text: str

free text description

url pydantic-field ¤

url: Optional[HttpUrl] = None

URL to cite (preferably specify a doi instead/also). Note: Either doi or url have to be specified.
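The doi-or-url rule is enforced by the private validator `_check_doi_or_url`; the sketch below is an illustrative standalone check of the same constraints, with the DOI pattern taken from the JSON schema above:

```python
import re

# DOI pattern from the schema: "10." + four digits + suffix
DOI_PATTERN = re.compile(r"^10\.[0-9]{4}.+$")


def check_cite_entry(text, doi=None, url=None):
    """Validate the CiteEntry constraints on plain values (illustrative)."""
    if not text:
        raise ValueError("'text' is required")
    if doi is None and url is None:
        raise ValueError("either 'doi' or 'url' has to be specified")
    if doi is not None and not DOI_PATTERN.match(doi):
        raise ValueError(f"invalid DOI: {doi!r}")
    return {"text": text, "doi": doi, "url": url}
```

For example, an entry with only free text and no `doi` or `url` is rejected, while one with a well-formed DOI passes.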

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ClipDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Set tensor values below min to min and above max to max.

See ScaleRangeDescr for examples.

Show JSON schema:
{
  "$defs": {
    "ClipKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [ClipDescr][]",
      "properties": {
        "min": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
          "title": "Min"
        },
        "min_percentile": {
          "anyOf": [
            {
              "exclusiveMaximum": 100,
              "minimum": 0,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
          "title": "Min Percentile"
        },
        "max": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
          "title": "Max"
        },
        "max_percentile": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "maximum": 100,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
          "title": "Max Percentile"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        }
      },
      "title": "model.v0_5.ClipKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
  "properties": {
    "id": {
      "const": "clip",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/ClipKwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.ClipDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['clip'] = 'clip'

implemented_id class-attribute ¤

implemented_id: Literal['clip'] = 'clip'

kwargs pydantic-field ¤

kwargs: ClipKwargs
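In raw RDF form, a `ClipDescr` is a mapping with the fixed id `"clip"` and a `ClipKwargs` mapping (the min/max values below are illustrative):

```python
# Illustrative raw-dict form of a ClipDescr per the JSON schema above.
clip_descr = {
    "id": "clip",                        # const "clip" (required)
    "kwargs": {"min": 0.0, "max": 1.0},  # ClipKwargs: fixed bounds or percentiles
}

# Exactly the two schema-required keys are present here.
assert set(clip_descr) == {"id", "kwargs"}
```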

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ClipKwargs pydantic-model ¤

Bases: KwargsNode

keyword arguments for ClipDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "keyword arguments for [ClipDescr][]",
  "properties": {
    "min": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
      "title": "Min"
    },
    "min_percentile": {
      "anyOf": [
        {
          "exclusiveMaximum": 100,
          "minimum": 0,
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
      "title": "Min Percentile"
    },
    "max": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
      "title": "Max"
    },
    "max_percentile": {
      "anyOf": [
        {
          "exclusiveMinimum": 1,
          "maximum": 100,
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
      "title": "Max Percentile"
    },
    "axes": {
      "anyOf": [
        {
          "items": {
            "maxLength": 16,
            "minLength": 1,
            "title": "AxisId",
            "type": "string"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
      "examples": [
        [
          "batch",
          "x",
          "y"
        ]
      ],
      "title": "Axes"
    }
  },
  "title": "model.v0_5.ClipKwargs",
  "type": "object"
}

Fields:

  • min (Optional[float])
  • min_percentile (Optional[Annotated[float, Interval(ge=0, lt=100)]])
  • max (Optional[float])
  • max_percentile (Optional[Annotated[float, Interval(gt=1, le=100)]])
  • axes (Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])])

Validators:

  • _validate

axes pydantic-field ¤

axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None

The subset of axes to determine percentiles jointly, i.e. axes to reduce to compute min/max from min_percentile/max_percentile. For example, to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x'), resulting in a tensor of equal shape with clipped values per channel, specify axes=('batch', 'x', 'y'). To clip samples independently, leave out the 'batch' axis.

Only valid if min_percentile and/or max_percentile are set.

Default: Compute percentiles over all axes jointly.

max pydantic-field ¤

max: Optional[float] = None

Maximum value for clipping.

Exclusive with max_percentile.

max_percentile pydantic-field ¤

max_percentile: Optional[
    Annotated[float, Interval(gt=1, le=100)]
] = None

Maximum percentile for clipping.

Exclusive with max.

In range (1, 100].

min pydantic-field ¤

min: Optional[float] = None

Minimum value for clipping.

Exclusive with min_percentile

min_percentile pydantic-field ¤

min_percentile: Optional[
    Annotated[float, Interval(ge=0, lt=100)]
] = None

Minimum percentile for clipping.

Exclusive with min.

In range [0, 100).
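The fields above are mutually exclusive in pairs (`min` vs `min_percentile`, `max` vs `max_percentile`). A minimal sketch of the resulting clipping on a flat list of values (the real operation runs on tensors and can reduce over a subset of axes; the nearest-rank percentile here is a simplifying assumption):

```python
def percentile(values, q):
    """Nearest-rank percentile of a non-empty list, q in [0, 100]."""
    ordered = sorted(values)
    rank = min(len(ordered) - 1, int(round(q / 100 * (len(ordered) - 1))))
    return ordered[rank]


def clip(values, *, min=None, min_percentile=None, max=None, max_percentile=None):
    if min is not None and min_percentile is not None:
        raise ValueError("'min' is exclusive with 'min_percentile'")
    if max is not None and max_percentile is not None:
        raise ValueError("'max' is exclusive with 'max_percentile'")
    # Resolve percentile bounds to concrete values, if given.
    lo = percentile(values, min_percentile) if min_percentile is not None else min
    hi = percentile(values, max_percentile) if max_percentile is not None else max
    out = values
    if lo is not None:
        out = [lo if v < lo else v for v in out]
    if hi is not None:
        out = [hi if v > hi else v for v in out]
    return out
```

For example, `clip([-1, 0, 5], min=0.0, max=1.0)` clamps the outliers to the fixed bounds, while passing both `min` and `min_percentile` raises.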

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default
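Together, `__contains__`, `__getitem__`, and `get` give kwargs nodes a read-only dict-style view over their model fields. The same protocol can be sketched on a plain class (hypothetical, not bioimageio code):

```python
class FieldMapping:
    """Read-only dict-style access over a fixed set of attributes."""

    model_fields = ("min", "max")

    def __init__(self, **kwargs):
        for name in self.model_fields:
            setattr(self, name, kwargs.get(name))

    def __contains__(self, item):
        return item in self.model_fields

    def __getitem__(self, item):
        if item in self.model_fields:
            return getattr(self, item)
        raise KeyError(item)

    def get(self, item, default=None):
        # Mirrors the KwargsNode pattern: fall back to default for unknown keys.
        return self[item] if item in self else default
```

This is why expressions like `"min" in kwargs` and `kwargs.get("max")` work on kwargs nodes without converting them to plain dicts.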

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Config pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "$defs": {
    "BiasRisksLimitations": {
      "additionalProperties": true,
      "description": "Known biases, risks, technical limitations, and recommendations for model use.",
      "properties": {
        "known_biases": {
          "default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
          "description": "Biases in training data or model behavior.",
          "title": "Known Biases",
          "type": "string"
        },
        "risks": {
          "default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
          "description": "Potential risks in the context of bioimage analysis.",
          "title": "Risks",
          "type": "string"
        },
        "limitations": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Technical limitations and failure modes.",
          "title": "Limitations"
        },
        "recommendations": {
          "default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
          "description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
          "title": "Recommendations",
          "type": "string"
        }
      },
      "title": "model.v0_5.BiasRisksLimitations",
      "type": "object"
    },
    "BioimageioConfig": {
      "additionalProperties": true,
      "properties": {
        "reproducibility_tolerance": {
          "default": [],
          "description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
          "items": {
            "$ref": "#/$defs/ReproducibilityTolerance"
          },
          "title": "Reproducibility Tolerance",
          "type": "array"
        },
        "funded_by": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Funding agency, grant number if applicable",
          "title": "Funded By"
        },
        "architecture_type": {
          "anyOf": [
            {
              "maxLength": 32,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
          "title": "Architecture Type"
        },
        "architecture_description": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Text description of model architecture.",
          "title": "Architecture Description"
        },
        "modality": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
          "title": "Modality"
        },
        "target_structure": {
          "description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
          "items": {
            "type": "string"
          },
          "title": "Target Structure",
          "type": "array"
        },
        "task": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
          "title": "Task"
        },
        "new_version": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "ModelId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A new version of this model exists with a different model id.",
          "title": "New Version"
        },
        "out_of_scope_use": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
          "title": "Out Of Scope Use"
        },
        "bias_risks_limitations": {
          "$ref": "#/$defs/BiasRisksLimitations",
          "description": "Description of known bias, risks, and technical limitations for in-scope model use."
        },
        "model_parameter_count": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total number of model parameters.",
          "title": "Model Parameter Count"
        },
        "training": {
          "$ref": "#/$defs/TrainingDetails",
          "description": "Details on how the model was trained."
        },
        "inference_time": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
          "title": "Inference Time"
        },
        "memory_requirements_inference": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
          "title": "Memory Requirements Inference"
        },
        "memory_requirements_training": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
          "title": "Memory Requirements Training"
        },
        "evaluations": {
          "description": "Quantitative model evaluations.\n\nNote:\n    At the moment we recommend to include only a single test dataset\n    (with evaluation factors that may mark subsets of the dataset)\n    to avoid confusion and make the presentation of results cleaner.",
          "items": {
            "$ref": "#/$defs/Evaluation"
          },
          "title": "Evaluations",
          "type": "array"
        },
        "environmental_impact": {
          "$ref": "#/$defs/EnvironmentalImpact",
          "description": "Environmental considerations for model training and deployment"
        }
      },
      "title": "model.v0_5.BioimageioConfig",
      "type": "object"
    },
    "EnvironmentalImpact": {
      "additionalProperties": true,
      "description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
      "properties": {
        "hardware_type": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU/CPU specifications",
          "title": "Hardware Type"
        },
        "hours_used": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total compute hours",
          "title": "Hours Used"
        },
        "cloud_provider": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "If applicable",
          "title": "Cloud Provider"
        },
        "compute_region": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Geographic location",
          "title": "Compute Region"
        },
        "co2_emitted": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
          "title": "Co2 Emitted"
        }
      },
      "title": "model.v0_5.EnvironmentalImpact",
      "type": "object"
    },
    "Evaluation": {
      "additionalProperties": true,
      "properties": {
        "model_id": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "ModelId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Model being evaluated.",
          "title": "Model Id"
        },
        "dataset_id": {
          "description": "Dataset used for evaluation.",
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        },
        "dataset_source": {
          "description": "Source of the dataset.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        "dataset_role": {
          "description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
          "enum": [
            "train",
            "validation",
            "test",
            "independent",
            "unknown"
          ],
          "title": "Dataset Role",
          "type": "string"
        },
        "sample_count": {
          "description": "Number of evaluated samples.",
          "title": "Sample Count",
          "type": "integer"
        },
        "evaluation_factors": {
          "description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Evaluation Factors",
          "type": "array"
        },
        "evaluation_factors_long": {
          "description": "Descriptions (long form) of each evaluation factor.",
          "items": {
            "type": "string"
          },
          "title": "Evaluation Factors Long",
          "type": "array"
        },
        "metrics": {
          "description": "(Abbreviations of) metrics used for evaluation.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Metrics",
          "type": "array"
        },
        "metrics_long": {
          "description": "Description of each metric used.",
          "items": {
            "type": "string"
          },
          "title": "Metrics Long",
          "type": "array"
        },
        "results": {
          "description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
          "items": {
            "items": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "number"
                },
                {
                  "type": "integer"
                }
              ]
            },
            "type": "array"
          },
          "title": "Results",
          "type": "array"
        },
        "results_summary": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Interpretation of results for general audience.\n\nConsider:\n    - Overall model performance\n    - Comparison to existing methods\n    - Limitations and areas for improvement",
          "title": "Results Summary"
        }
      },
      "required": [
        "dataset_id",
        "dataset_source",
        "dataset_role",
        "sample_count",
        "evaluation_factors",
        "evaluation_factors_long",
        "metrics",
        "metrics_long",
        "results"
      ],
      "title": "model.v0_5.Evaluation",
      "type": "object"
    },
    "ReproducibilityTolerance": {
      "additionalProperties": true,
      "description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n    For testing we can request the respective deep learning frameworks to be as\n    reproducible as possible by setting seeds and chosing deterministic algorithms,\n    but differences in operating systems, available hardware and installed drivers\n    may still lead to numerical differences.",
      "properties": {
        "relative_tolerance": {
          "default": 0.001,
          "description": "Maximum relative tolerance of reproduced test tensor.",
          "maximum": 0.01,
          "minimum": 0,
          "title": "Relative Tolerance",
          "type": "number"
        },
        "absolute_tolerance": {
          "default": 0.001,
          "description": "Maximum absolute tolerance of reproduced test tensor.",
          "minimum": 0,
          "title": "Absolute Tolerance",
          "type": "number"
        },
        "mismatched_elements_per_million": {
          "default": 100,
          "description": "Maximum number of mismatched elements/pixels per million to tolerate.",
          "maximum": 1000,
          "minimum": 0,
          "title": "Mismatched Elements Per Million",
          "type": "integer"
        },
        "output_ids": {
          "default": [],
          "description": "Limits the output tensor IDs these reproducibility details apply to.",
          "items": {
            "maxLength": 32,
            "minLength": 1,
            "title": "TensorId",
            "type": "string"
          },
          "title": "Output Ids",
          "type": "array"
        },
        "weights_formats": {
          "default": [],
          "description": "Limits the weights formats these details apply to.",
          "items": {
            "enum": [
              "keras_hdf5",
              "keras_v3",
              "onnx",
              "pytorch_state_dict",
              "tensorflow_js",
              "tensorflow_saved_model_bundle",
              "torchscript"
            ],
            "type": "string"
          },
          "title": "Weights Formats",
          "type": "array"
        }
      },
      "title": "model.v0_5.ReproducibilityTolerance",
      "type": "object"
    },
    "TrainingDetails": {
      "additionalProperties": true,
      "properties": {
        "training_preprocessing": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
          "title": "Training Preprocessing"
        },
        "training_epochs": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Number of training epochs.",
          "title": "Training Epochs"
        },
        "training_batch_size": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Batch size used in training.",
          "title": "Training Batch Size"
        },
        "initial_learning_rate": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Initial learning rate used in training.",
          "title": "Initial Learning Rate"
        },
        "learning_rate_schedule": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Learning rate schedule used in training.",
          "title": "Learning Rate Schedule"
        },
        "loss_function": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Loss function used in training, e.g. nn.MSELoss.",
          "title": "Loss Function"
        },
        "loss_function_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `loss_function`",
          "title": "Loss Function Kwargs",
          "type": "object"
        },
        "optimizer": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "optimizer, e.g. torch.optim.Adam",
          "title": "Optimizer"
        },
        "optimizer_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `optimizer`",
          "title": "Optimizer Kwargs",
          "type": "object"
        },
        "regularization": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
          "title": "Regularization"
        },
        "training_duration": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total training duration in hours.",
          "title": "Training Duration"
        }
      },
      "title": "model.v0_5.TrainingDetails",
      "type": "object"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": true,
  "properties": {
    "bioimageio": {
      "$ref": "#/$defs/BioimageioConfig"
    },
    "stardist": {
      "$ref": "#/$defs/YamlValue",
      "default": null
    }
  },
  "title": "model.v0_5.Config",
  "type": "object"
}

Fields:

bioimageio pydantic-field ¤

bioimageio: BioimageioConfig

stardist pydantic-field ¤

stardist: YamlValue = None
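The tolerance criterion described in the ReproducibilityTolerance schema above can be sketched in plain Python. This is a hypothetical standalone reimplementation for illustration only; bioimageio.spec itself delegates the comparison to numpy.testing.assert_allclose, and the function and parameter names below are made up.

```python
# Hypothetical sketch (not the library implementation): an element is
# mismatched if
#   abs(output - test) > absolute_tolerance + relative_tolerance * abs(test)
# and a reproduction passes if at most `mismatched_per_million` elements
# per million are mismatched.
from typing import Sequence


def count_mismatched(
    outputs: Sequence[float],
    tests: Sequence[float],
    rtol: float = 0.001,
    atol: float = 0.001,
) -> int:
    """Count elements exceeding the combined absolute/relative tolerance."""
    return sum(
        1
        for out, ref in zip(outputs, tests)
        if abs(out - ref) > atol + rtol * abs(ref)
    )


def within_tolerance(
    outputs: Sequence[float],
    tests: Sequence[float],
    rtol: float = 0.001,
    atol: float = 0.001,
    mismatched_per_million: int = 100,
) -> bool:
    """True if the reproduced output is acceptably close to the test tensor."""
    allowed = len(tests) * mismatched_per_million / 1_000_000
    return count_mismatched(outputs, tests, rtol, atol) <= allowed
```

Note that for small tensors even a single mismatched element exceeds the default budget of 100 mismatched elements per million.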

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
  • strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
  • from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
  • context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
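The `with context:` pattern in `model_validate` above can be illustrated with a minimal standalone sketch. Everything below (`SketchValidationContext`, the `perform_io_checks` setting) is hypothetical and only mimics the idea of a contextvars-backed validation context entered as a context manager, so that validation behaves the same whether triggered via `__init__` or `model_validate`.

```python
# Hypothetical sketch of the validation-context pattern used by
# `model_validate`: the context is a context manager backed by a
# contextvar, so any validation running inside `with ctx:` sees the
# same settings regardless of how it was triggered.
import contextvars
from dataclasses import dataclass
from typing import Optional

_current: contextvars.ContextVar[Optional["SketchValidationContext"]] = (
    contextvars.ContextVar("validation_context", default=None)
)


@dataclass
class SketchValidationContext:
    perform_io_checks: bool = True  # hypothetical setting

    def __enter__(self) -> "SketchValidationContext":
        # remember the token so nested contexts restore correctly
        self._token = _current.set(self)
        return self

    def __exit__(self, *exc: object) -> None:
        _current.reset(self._token)


def get_validation_context() -> "SketchValidationContext":
    """Return the active context, falling back to default settings."""
    return _current.get() or SketchValidationContext()
```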

DataDependentSize pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "min": {
      "default": 1,
      "exclusiveMinimum": 0,
      "title": "Min",
      "type": "integer"
    },
    "max": {
      "anyOf": [
        {
          "exclusiveMinimum": 1,
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Max"
    }
  },
  "title": "model.v0_5.DataDependentSize",
  "type": "object"
}

Fields:

  • min (Annotated[int, Gt(0)])
  • max (Annotated[Optional[int], Gt(1)])

Validators:

  • _validate_max_gt_min

max pydantic-field ¤

max: Annotated[Optional[int], Gt(1)] = None

min pydantic-field ¤

min: Annotated[int, Gt(0)] = 1

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
  • strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
  • from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
  • context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_size ¤

validate_size(size: int, msg_prefix: str = '') -> int
Source code in src/bioimageio/spec/model/v0_5.py
def validate_size(self, size: int, msg_prefix: str = "") -> int:
    if size < self.min:
        raise ValueError(f"{msg_prefix}size {size} < {self.min}")

    if self.max is not None and size > self.max:
        raise ValueError(f"{msg_prefix}size {size} > {self.max}")

    return size
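A minimal standalone sketch of the DataDependentSize semantics, combining the bounds check shown above with the max > min constraint that the `_validate_max_gt_min` validator presumably enforces. The `SizeBounds` class is hypothetical and not part of bioimageio.spec.

```python
# Hypothetical standalone sketch of DataDependentSize semantics
# (SizeBounds is illustrative, not part of bioimageio.spec):
from dataclasses import dataclass
from typing import Optional


@dataclass
class SizeBounds:
    min: int = 1
    max: Optional[int] = None

    def __post_init__(self) -> None:
        if self.min <= 0:
            raise ValueError(f"min ({self.min}) must be > 0")
        if self.max is not None and self.max <= self.min:
            # mirrors `_validate_max_gt_min` (assumed to enforce max > min)
            raise ValueError(f"max ({self.max}) must be > min ({self.min})")

    def validate_size(self, size: int, msg_prefix: str = "") -> int:
        if size < self.min:
            raise ValueError(f"{msg_prefix}size {size} < {self.min}")
        if self.max is not None and size > self.max:
            raise ValueError(f"{msg_prefix}size {size} > {self.max}")
        return size
```

For example, `SizeBounds(min=16, max=512).validate_size(32)` passes, while a size of 8 raises a ValueError.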

DatasetDescr pydantic-model ¤

Bases: GenericDescrBase

A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "BadgeDescr": {
      "additionalProperties": false,
      "description": "A custom badge",
      "properties": {
        "label": {
          "description": "badge label to display on hover",
          "examples": [
            "Open in Colab"
          ],
          "title": "Label",
          "type": "string"
        },
        "icon": {
          "anyOf": [
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "badge icon (included in bioimage.io package if not a URL)",
          "examples": [
            "https://colab.research.google.com/assets/colab-badge.svg"
          ],
          "title": "Icon"
        },
        "url": {
          "description": "target URL",
          "examples": [
            "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
          ],
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        }
      },
      "required": [
        "label",
        "url"
      ],
      "title": "generic.v0_2.BadgeDescr",
      "type": "object"
    },
    "BioimageioConfig": {
      "additionalProperties": true,
      "description": "bioimage.io internal metadata.",
      "properties": {},
      "title": "generic.v0_3.BioimageioConfig",
      "type": "object"
    },
    "CiteEntry": {
      "additionalProperties": false,
      "description": "A citation that should be referenced in work using this resource.",
      "properties": {
        "text": {
          "description": "free text description",
          "title": "Text",
          "type": "string"
        },
        "doi": {
          "anyOf": [
            {
              "description": "A digital object identifier, see https://www.doi.org/",
              "pattern": "^10\\.[0-9]{4}.+$",
              "title": "Doi",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n    Either **doi** or **url** have to be specified.",
          "title": "Doi"
        },
        "url": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n    Either **doi** or **url** have to be specified.",
          "title": "Url"
        }
      },
      "required": [
        "text"
      ],
      "title": "generic.v0_3.CiteEntry",
      "type": "object"
    },
    "Config": {
      "additionalProperties": true,
      "description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
      "properties": {
        "bioimageio": {
          "$ref": "#/$defs/BioimageioConfig"
        }
      },
      "title": "generic.v0_3.Config",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "Maintainer": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Name"
        },
        "github_user": {
          "title": "Github User",
          "type": "string"
        }
      },
      "required": [
        "github_user"
      ],
      "title": "generic.v0_3.Maintainer",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Uploader": {
      "additionalProperties": false,
      "properties": {
        "email": {
          "description": "Email",
          "format": "email",
          "title": "Email",
          "type": "string"
        },
        "name": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "name",
          "title": "Name"
        }
      },
      "required": [
        "email"
      ],
      "title": "generic.v0_2.Uploader",
      "type": "object"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
  "properties": {
    "name": {
      "description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
      "maxLength": 128,
      "minLength": 5,
      "title": "Name",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A string containing a brief description.",
      "maxLength": 1024,
      "title": "Description",
      "type": "string"
    },
    "covers": {
      "description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
      "examples": [
        [
          "cover.png"
        ]
      ],
      "items": {
        "anyOf": [
          {
            "description": "A URL with the HTTP or HTTPS scheme.",
            "format": "uri",
            "maxLength": 2083,
            "minLength": 1,
            "title": "HttpUrl",
            "type": "string"
          },
          {
            "$ref": "#/$defs/RelativeFilePath"
          },
          {
            "format": "file-path",
            "title": "FilePath",
            "type": "string"
          }
        ]
      },
      "title": "Covers",
      "type": "array"
    },
    "id_emoji": {
      "anyOf": [
        {
          "examples": [
            "\ud83e\udd88",
            "\ud83e\udda5"
          ],
          "maxLength": 2,
          "minLength": 1,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "UTF-8 emoji for display alongside the `id`.",
      "title": "Id Emoji"
    },
    "authors": {
      "description": "The authors are the creators of this resource description and the primary points of contact.",
      "items": {
        "$ref": "#/$defs/Author"
      },
      "title": "Authors",
      "type": "array"
    },
    "attachments": {
      "description": "file attachments",
      "items": {
        "$ref": "#/$defs/FileDescr"
      },
      "title": "Attachments",
      "type": "array"
    },
    "cite": {
      "description": "citations",
      "items": {
        "$ref": "#/$defs/CiteEntry"
      },
      "title": "Cite",
      "type": "array"
    },
    "license": {
      "anyOf": [
        {
          "enum": [
            "0BSD",
            "3D-Slicer-1.0",
            "AAL",
            "Abstyles",
            "AdaCore-doc",
            "Adobe-2006",
            "Adobe-Display-PostScript",
            "Adobe-Glyph",
            "Adobe-Utopia",
            "ADSL",
            "AFL-1.1",
            "AFL-1.2",
            "AFL-2.0",
            "AFL-2.1",
            "AFL-3.0",
            "Afmparse",
            "AGPL-1.0-only",
            "AGPL-1.0-or-later",
            "AGPL-3.0-only",
            "AGPL-3.0-or-later",
            "Aladdin",
            "AMD-newlib",
            "AMDPLPA",
            "AML",
            "AML-glslang",
            "AMPAS",
            "ANTLR-PD",
            "ANTLR-PD-fallback",
            "any-OSI",
            "any-OSI-perl-modules",
            "Apache-1.0",
            "Apache-1.1",
            "Apache-2.0",
            "APAFML",
            "APL-1.0",
            "App-s2p",
            "APSL-1.0",
            "APSL-1.1",
            "APSL-1.2",
            "APSL-2.0",
            "Arphic-1999",
            "Artistic-1.0",
            "Artistic-1.0-cl8",
            "Artistic-1.0-Perl",
            "Artistic-2.0",
            "Artistic-dist",
            "Aspell-RU",
            "ASWF-Digital-Assets-1.0",
            "ASWF-Digital-Assets-1.1",
            "Baekmuk",
            "Bahyph",
            "Barr",
            "bcrypt-Solar-Designer",
            "Beerware",
            "Bitstream-Charter",
            "Bitstream-Vera",
            "BitTorrent-1.0",
            "BitTorrent-1.1",
            "blessing",
            "BlueOak-1.0.0",
            "Boehm-GC",
            "Boehm-GC-without-fee",
            "Borceux",
            "Brian-Gladman-2-Clause",
            "Brian-Gladman-3-Clause",
            "BSD-1-Clause",
            "BSD-2-Clause",
            "BSD-2-Clause-Darwin",
            "BSD-2-Clause-first-lines",
            "BSD-2-Clause-Patent",
            "BSD-2-Clause-pkgconf-disclaimer",
            "BSD-2-Clause-Views",
            "BSD-3-Clause",
            "BSD-3-Clause-acpica",
            "BSD-3-Clause-Attribution",
            "BSD-3-Clause-Clear",
            "BSD-3-Clause-flex",
            "BSD-3-Clause-HP",
            "BSD-3-Clause-LBNL",
            "BSD-3-Clause-Modification",
            "BSD-3-Clause-No-Military-License",
            "BSD-3-Clause-No-Nuclear-License",
            "BSD-3-Clause-No-Nuclear-License-2014",
            "BSD-3-Clause-No-Nuclear-Warranty",
            "BSD-3-Clause-Open-MPI",
            "BSD-3-Clause-Sun",
            "BSD-4-Clause",
            "BSD-4-Clause-Shortened",
            "BSD-4-Clause-UC",
            "BSD-4.3RENO",
            "BSD-4.3TAHOE",
            "BSD-Advertising-Acknowledgement",
            "BSD-Attribution-HPND-disclaimer",
            "BSD-Inferno-Nettverk",
            "BSD-Protection",
            "BSD-Source-beginning-file",
            "BSD-Source-Code",
            "BSD-Systemics",
            "BSD-Systemics-W3Works",
            "BSL-1.0",
            "BUSL-1.1",
            "bzip2-1.0.6",
            "C-UDA-1.0",
            "CAL-1.0",
            "CAL-1.0-Combined-Work-Exception",
            "Caldera",
            "Caldera-no-preamble",
            "Catharon",
            "CATOSL-1.1",
            "CC-BY-1.0",
            "CC-BY-2.0",
            "CC-BY-2.5",
            "CC-BY-2.5-AU",
            "CC-BY-3.0",
            "CC-BY-3.0-AT",
            "CC-BY-3.0-AU",
            "CC-BY-3.0-DE",
            "CC-BY-3.0-IGO",
            "CC-BY-3.0-NL",
            "CC-BY-3.0-US",
            "CC-BY-4.0",
            "CC-BY-NC-1.0",
            "CC-BY-NC-2.0",
            "CC-BY-NC-2.5",
            "CC-BY-NC-3.0",
            "CC-BY-NC-3.0-DE",
            "CC-BY-NC-4.0",
            "CC-BY-NC-ND-1.0",
            "CC-BY-NC-ND-2.0",
            "CC-BY-NC-ND-2.5",
            "CC-BY-NC-ND-3.0",
            "CC-BY-NC-ND-3.0-DE",
            "CC-BY-NC-ND-3.0-IGO",
            "CC-BY-NC-ND-4.0",
            "CC-BY-NC-SA-1.0",
            "CC-BY-NC-SA-2.0",
            "CC-BY-NC-SA-2.0-DE",
            "CC-BY-NC-SA-2.0-FR",
            "CC-BY-NC-SA-2.0-UK",
            "CC-BY-NC-SA-2.5",
            "CC-BY-NC-SA-3.0",
            "CC-BY-NC-SA-3.0-DE",
            "CC-BY-NC-SA-3.0-IGO",
            "CC-BY-NC-SA-4.0",
            "CC-BY-ND-1.0",
            "CC-BY-ND-2.0",
            "CC-BY-ND-2.5",
            "CC-BY-ND-3.0",
            "CC-BY-ND-3.0-DE",
            "CC-BY-ND-4.0",
            "CC-BY-SA-1.0",
            "CC-BY-SA-2.0",
            "CC-BY-SA-2.0-UK",
            "CC-BY-SA-2.1-JP",
            "CC-BY-SA-2.5",
            "CC-BY-SA-3.0",
            "CC-BY-SA-3.0-AT",
            "CC-BY-SA-3.0-DE",
            "CC-BY-SA-3.0-IGO",
            "CC-BY-SA-4.0",
            "CC-PDDC",
            "CC-PDM-1.0",
            "CC-SA-1.0",
            "CC0-1.0",
            "CDDL-1.0",
            "CDDL-1.1",
            "CDL-1.0",
            "CDLA-Permissive-1.0",
            "CDLA-Permissive-2.0",
            "CDLA-Sharing-1.0",
            "CECILL-1.0",
            "CECILL-1.1",
            "CECILL-2.0",
            "CECILL-2.1",
            "CECILL-B",
            "CECILL-C",
            "CERN-OHL-1.1",
            "CERN-OHL-1.2",
            "CERN-OHL-P-2.0",
            "CERN-OHL-S-2.0",
            "CERN-OHL-W-2.0",
            "CFITSIO",
            "check-cvs",
            "checkmk",
            "ClArtistic",
            "Clips",
            "CMU-Mach",
            "CMU-Mach-nodoc",
            "CNRI-Jython",
            "CNRI-Python",
            "CNRI-Python-GPL-Compatible",
            "COIL-1.0",
            "Community-Spec-1.0",
            "Condor-1.1",
            "copyleft-next-0.3.0",
            "copyleft-next-0.3.1",
            "Cornell-Lossless-JPEG",
            "CPAL-1.0",
            "CPL-1.0",
            "CPOL-1.02",
            "Cronyx",
            "Crossword",
            "CryptoSwift",
            "CrystalStacker",
            "CUA-OPL-1.0",
            "Cube",
            "curl",
            "cve-tou",
            "D-FSL-1.0",
            "DEC-3-Clause",
            "diffmark",
            "DL-DE-BY-2.0",
            "DL-DE-ZERO-2.0",
            "DOC",
            "DocBook-DTD",
            "DocBook-Schema",
            "DocBook-Stylesheet",
            "DocBook-XML",
            "Dotseqn",
            "DRL-1.0",
            "DRL-1.1",
            "DSDP",
            "dtoa",
            "dvipdfm",
            "ECL-1.0",
            "ECL-2.0",
            "EFL-1.0",
            "EFL-2.0",
            "eGenix",
            "Elastic-2.0",
            "Entessa",
            "EPICS",
            "EPL-1.0",
            "EPL-2.0",
            "ErlPL-1.1",
            "etalab-2.0",
            "EUDatagrid",
            "EUPL-1.0",
            "EUPL-1.1",
            "EUPL-1.2",
            "Eurosym",
            "Fair",
            "FBM",
            "FDK-AAC",
            "Ferguson-Twofish",
            "Frameworx-1.0",
            "FreeBSD-DOC",
            "FreeImage",
            "FSFAP",
            "FSFAP-no-warranty-disclaimer",
            "FSFUL",
            "FSFULLR",
            "FSFULLRSD",
            "FSFULLRWD",
            "FSL-1.1-ALv2",
            "FSL-1.1-MIT",
            "FTL",
            "Furuseth",
            "fwlw",
            "Game-Programming-Gems",
            "GCR-docs",
            "GD",
            "generic-xts",
            "GFDL-1.1-invariants-only",
            "GFDL-1.1-invariants-or-later",
            "GFDL-1.1-no-invariants-only",
            "GFDL-1.1-no-invariants-or-later",
            "GFDL-1.1-only",
            "GFDL-1.1-or-later",
            "GFDL-1.2-invariants-only",
            "GFDL-1.2-invariants-or-later",
            "GFDL-1.2-no-invariants-only",
            "GFDL-1.2-no-invariants-or-later",
            "GFDL-1.2-only",
            "GFDL-1.2-or-later",
            "GFDL-1.3-invariants-only",
            "GFDL-1.3-invariants-or-later",
            "GFDL-1.3-no-invariants-only",
            "GFDL-1.3-no-invariants-or-later",
            "GFDL-1.3-only",
            "GFDL-1.3-or-later",
            "Giftware",
            "GL2PS",
            "Glide",
            "Glulxe",
            "GLWTPL",
            "gnuplot",
            "GPL-1.0-only",
            "GPL-1.0-or-later",
            "GPL-2.0-only",
            "GPL-2.0-or-later",
            "GPL-3.0-only",
            "GPL-3.0-or-later",
            "Graphics-Gems",
            "gSOAP-1.3b",
            "gtkbook",
            "Gutmann",
            "HaskellReport",
            "HDF5",
            "hdparm",
            "HIDAPI",
            "Hippocratic-2.1",
            "HP-1986",
            "HP-1989",
            "HPND",
            "HPND-DEC",
            "HPND-doc",
            "HPND-doc-sell",
            "HPND-export-US",
            "HPND-export-US-acknowledgement",
            "HPND-export-US-modify",
            "HPND-export2-US",
            "HPND-Fenneberg-Livingston",
            "HPND-INRIA-IMAG",
            "HPND-Intel",
            "HPND-Kevlin-Henney",
            "HPND-Markus-Kuhn",
            "HPND-merchantability-variant",
            "HPND-MIT-disclaimer",
            "HPND-Netrek",
            "HPND-Pbmplus",
            "HPND-sell-MIT-disclaimer-xserver",
            "HPND-sell-regexpr",
            "HPND-sell-variant",
            "HPND-sell-variant-MIT-disclaimer",
            "HPND-sell-variant-MIT-disclaimer-rev",
            "HPND-UC",
            "HPND-UC-export-US",
            "HTMLTIDY",
            "IBM-pibs",
            "ICU",
            "IEC-Code-Components-EULA",
            "IJG",
            "IJG-short",
            "ImageMagick",
            "iMatix",
            "Imlib2",
            "Info-ZIP",
            "Inner-Net-2.0",
            "InnoSetup",
            "Intel",
            "Intel-ACPI",
            "Interbase-1.0",
            "IPA",
            "IPL-1.0",
            "ISC",
            "ISC-Veillard",
            "Jam",
            "JasPer-2.0",
            "jove",
            "JPL-image",
            "JPNIC",
            "JSON",
            "Kastrup",
            "Kazlib",
            "Knuth-CTAN",
            "LAL-1.2",
            "LAL-1.3",
            "Latex2e",
            "Latex2e-translated-notice",
            "Leptonica",
            "LGPL-2.0-only",
            "LGPL-2.0-or-later",
            "LGPL-2.1-only",
            "LGPL-2.1-or-later",
            "LGPL-3.0-only",
            "LGPL-3.0-or-later",
            "LGPLLR",
            "Libpng",
            "libpng-1.6.35",
            "libpng-2.0",
            "libselinux-1.0",
            "libtiff",
            "libutil-David-Nugent",
            "LiLiQ-P-1.1",
            "LiLiQ-R-1.1",
            "LiLiQ-Rplus-1.1",
            "Linux-man-pages-1-para",
            "Linux-man-pages-copyleft",
            "Linux-man-pages-copyleft-2-para",
            "Linux-man-pages-copyleft-var",
            "Linux-OpenIB",
            "LOOP",
            "LPD-document",
            "LPL-1.0",
            "LPL-1.02",
            "LPPL-1.0",
            "LPPL-1.1",
            "LPPL-1.2",
            "LPPL-1.3a",
            "LPPL-1.3c",
            "lsof",
            "Lucida-Bitmap-Fonts",
            "LZMA-SDK-9.11-to-9.20",
            "LZMA-SDK-9.22",
            "Mackerras-3-Clause",
            "Mackerras-3-Clause-acknowledgment",
            "magaz",
            "mailprio",
            "MakeIndex",
            "man2html",
            "Martin-Birgmeier",
            "McPhee-slideshow",
            "metamail",
            "Minpack",
            "MIPS",
            "MirOS",
            "MIT",
            "MIT-0",
            "MIT-advertising",
            "MIT-Click",
            "MIT-CMU",
            "MIT-enna",
            "MIT-feh",
            "MIT-Festival",
            "MIT-Khronos-old",
            "MIT-Modern-Variant",
            "MIT-open-group",
            "MIT-testregex",
            "MIT-Wu",
            "MITNFA",
            "MMIXware",
            "Motosoto",
            "MPEG-SSG",
            "mpi-permissive",
            "mpich2",
            "MPL-1.0",
            "MPL-1.1",
            "MPL-2.0",
            "MPL-2.0-no-copyleft-exception",
            "mplus",
            "MS-LPL",
            "MS-PL",
            "MS-RL",
            "MTLL",
            "MulanPSL-1.0",
            "MulanPSL-2.0",
            "Multics",
            "Mup",
            "NAIST-2003",
            "NASA-1.3",
            "Naumen",
            "NBPL-1.0",
            "NCBI-PD",
            "NCGL-UK-2.0",
            "NCL",
            "NCSA",
            "NetCDF",
            "Newsletr",
            "NGPL",
            "ngrep",
            "NICTA-1.0",
            "NIST-PD",
            "NIST-PD-fallback",
            "NIST-Software",
            "NLOD-1.0",
            "NLOD-2.0",
            "NLPL",
            "Nokia",
            "NOSL",
            "Noweb",
            "NPL-1.0",
            "NPL-1.1",
            "NPOSL-3.0",
            "NRL",
            "NTIA-PD",
            "NTP",
            "NTP-0",
            "O-UDA-1.0",
            "OAR",
            "OCCT-PL",
            "OCLC-2.0",
            "ODbL-1.0",
            "ODC-By-1.0",
            "OFFIS",
            "OFL-1.0",
            "OFL-1.0-no-RFN",
            "OFL-1.0-RFN",
            "OFL-1.1",
            "OFL-1.1-no-RFN",
            "OFL-1.1-RFN",
            "OGC-1.0",
            "OGDL-Taiwan-1.0",
            "OGL-Canada-2.0",
            "OGL-UK-1.0",
            "OGL-UK-2.0",
            "OGL-UK-3.0",
            "OGTSL",
            "OLDAP-1.1",
            "OLDAP-1.2",
            "OLDAP-1.3",
            "OLDAP-1.4",
            "OLDAP-2.0",
            "OLDAP-2.0.1",
            "OLDAP-2.1",
            "OLDAP-2.2",
            "OLDAP-2.2.1",
            "OLDAP-2.2.2",
            "OLDAP-2.3",
            "OLDAP-2.4",
            "OLDAP-2.5",
            "OLDAP-2.6",
            "OLDAP-2.7",
            "OLDAP-2.8",
            "OLFL-1.3",
            "OML",
            "OpenPBS-2.3",
            "OpenSSL",
            "OpenSSL-standalone",
            "OpenVision",
            "OPL-1.0",
            "OPL-UK-3.0",
            "OPUBL-1.0",
            "OSET-PL-2.1",
            "OSL-1.0",
            "OSL-1.1",
            "OSL-2.0",
            "OSL-2.1",
            "OSL-3.0",
            "PADL",
            "Parity-6.0.0",
            "Parity-7.0.0",
            "PDDL-1.0",
            "PHP-3.0",
            "PHP-3.01",
            "Pixar",
            "pkgconf",
            "Plexus",
            "pnmstitch",
            "PolyForm-Noncommercial-1.0.0",
            "PolyForm-Small-Business-1.0.0",
            "PostgreSQL",
            "PPL",
            "PSF-2.0",
            "psfrag",
            "psutils",
            "Python-2.0",
            "Python-2.0.1",
            "python-ldap",
            "Qhull",
            "QPL-1.0",
            "QPL-1.0-INRIA-2004",
            "radvd",
            "Rdisc",
            "RHeCos-1.1",
            "RPL-1.1",
            "RPL-1.5",
            "RPSL-1.0",
            "RSA-MD",
            "RSCPL",
            "Ruby",
            "Ruby-pty",
            "SAX-PD",
            "SAX-PD-2.0",
            "Saxpath",
            "SCEA",
            "SchemeReport",
            "Sendmail",
            "Sendmail-8.23",
            "Sendmail-Open-Source-1.1",
            "SGI-B-1.0",
            "SGI-B-1.1",
            "SGI-B-2.0",
            "SGI-OpenGL",
            "SGP4",
            "SHL-0.5",
            "SHL-0.51",
            "SimPL-2.0",
            "SISSL",
            "SISSL-1.2",
            "SL",
            "Sleepycat",
            "SMAIL-GPL",
            "SMLNJ",
            "SMPPL",
            "SNIA",
            "snprintf",
            "SOFA",
            "softSurfer",
            "Soundex",
            "Spencer-86",
            "Spencer-94",
            "Spencer-99",
            "SPL-1.0",
            "ssh-keyscan",
            "SSH-OpenSSH",
            "SSH-short",
            "SSLeay-standalone",
            "SSPL-1.0",
            "SugarCRM-1.1.3",
            "SUL-1.0",
            "Sun-PPP",
            "Sun-PPP-2000",
            "SunPro",
            "SWL",
            "swrule",
            "Symlinks",
            "TAPR-OHL-1.0",
            "TCL",
            "TCP-wrappers",
            "TermReadKey",
            "TGPPL-1.0",
            "ThirdEye",
            "threeparttable",
            "TMate",
            "TORQUE-1.1",
            "TOSL",
            "TPDL",
            "TPL-1.0",
            "TrustedQSL",
            "TTWL",
            "TTYP0",
            "TU-Berlin-1.0",
            "TU-Berlin-2.0",
            "Ubuntu-font-1.0",
            "UCAR",
            "UCL-1.0",
            "ulem",
            "UMich-Merit",
            "Unicode-3.0",
            "Unicode-DFS-2015",
            "Unicode-DFS-2016",
            "Unicode-TOU",
            "UnixCrypt",
            "Unlicense",
            "Unlicense-libtelnet",
            "Unlicense-libwhirlpool",
            "UPL-1.0",
            "URT-RLE",
            "Vim",
            "VOSTROM",
            "VSL-1.0",
            "W3C",
            "W3C-19980720",
            "W3C-20150513",
            "w3m",
            "Watcom-1.0",
            "Widget-Workshop",
            "Wsuipa",
            "WTFPL",
            "wwl",
            "X11",
            "X11-distribute-modifications-variant",
            "X11-swapped",
            "Xdebug-1.03",
            "Xerox",
            "Xfig",
            "XFree86-1.1",
            "xinetd",
            "xkeyboard-config-Zinoviev",
            "xlock",
            "Xnet",
            "xpp",
            "XSkat",
            "xzoom",
            "YPL-1.0",
            "YPL-1.1",
            "Zed",
            "Zeeff",
            "Zend-2.0",
            "Zimbra-1.3",
            "Zimbra-1.4",
            "Zlib",
            "zlib-acknowledgement",
            "ZPL-1.1",
            "ZPL-2.0",
            "ZPL-2.1"
          ],
          "title": "LicenseId",
          "type": "string"
        },
        {
          "enum": [
            "AGPL-1.0",
            "AGPL-3.0",
            "BSD-2-Clause-FreeBSD",
            "BSD-2-Clause-NetBSD",
            "bzip2-1.0.5",
            "eCos-2.0",
            "GFDL-1.1",
            "GFDL-1.2",
            "GFDL-1.3",
            "GPL-1.0",
            "GPL-1.0+",
            "GPL-2.0",
            "GPL-2.0+",
            "GPL-2.0-with-autoconf-exception",
            "GPL-2.0-with-bison-exception",
            "GPL-2.0-with-classpath-exception",
            "GPL-2.0-with-font-exception",
            "GPL-2.0-with-GCC-exception",
            "GPL-3.0",
            "GPL-3.0+",
            "GPL-3.0-with-autoconf-exception",
            "GPL-3.0-with-GCC-exception",
            "LGPL-2.0",
            "LGPL-2.0+",
            "LGPL-2.1",
            "LGPL-2.1+",
            "LGPL-3.0",
            "LGPL-3.0+",
            "Net-SNMP",
            "Nunit",
            "StandardML-NJ",
            "wxWindows"
          ],
          "title": "DeprecatedLicenseId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
      "examples": [
        "CC0-1.0",
        "MIT",
        "BSD-2-Clause"
      ],
      "title": "License"
    },
    "git_repo": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A URL to the Git repository where the resource is being developed.",
      "examples": [
        "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
      ],
      "title": "Git Repo"
    },
    "icon": {
      "anyOf": [
        {
          "maxLength": 2,
          "minLength": 1,
          "type": "string"
        },
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An icon for illustration, e.g. on bioimage.io",
      "title": "Icon"
    },
    "links": {
      "description": "IDs of other bioimage.io resources",
      "examples": [
        [
          "ilastik/ilastik",
          "deepimagej/deepimagej",
          "zero/notebook_u-net_3d_zerocostdl4mic"
        ]
      ],
      "items": {
        "type": "string"
      },
      "title": "Links",
      "type": "array"
    },
    "uploader": {
      "anyOf": [
        {
          "$ref": "#/$defs/Uploader"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The person who uploaded the model (e.g. to bioimage.io)"
    },
    "maintainers": {
      "description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
      "items": {
        "$ref": "#/$defs/Maintainer"
      },
      "title": "Maintainers",
      "type": "array"
    },
    "tags": {
      "description": "Associated tags",
      "examples": [
        [
          "unet2d",
          "pytorch",
          "nucleus",
          "segmentation",
          "dsb2018"
        ]
      ],
      "items": {
        "type": "string"
      },
      "title": "Tags",
      "type": "array"
    },
    "version": {
      "anyOf": [
        {
          "$ref": "#/$defs/Version"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The version of the resource following SemVer 2.0."
    },
    "version_comment": {
      "anyOf": [
        {
          "maxLength": 512,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A comment on the version of the resource.",
      "title": "Version Comment"
    },
    "format_version": {
      "const": "0.3.0",
      "description": "The **format** version of this resource specification",
      "title": "Format Version",
      "type": "string"
    },
    "documentation": {
      "anyOf": [
        {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "examples": [
            "https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
            "README.md"
          ]
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
      "title": "Documentation"
    },
    "badges": {
      "description": "badges associated with this resource",
      "items": {
        "$ref": "#/$defs/BadgeDescr"
      },
      "title": "Badges",
      "type": "array"
    },
    "config": {
      "$ref": "#/$defs/Config",
      "description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n    giraffe_neckometer:  # here is the domain name\n        length: 3837283\n        address:\n            home: zoo\n    imagej:              # config specific to ImageJ\n        macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
    },
    "type": {
      "const": "dataset",
      "title": "Type",
      "type": "string"
    },
    "id": {
      "anyOf": [
        {
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
      "title": "Id"
    },
    "parent": {
      "anyOf": [
        {
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The description from which this one is derived",
      "title": "Parent"
    },
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "\"URL to the source of the dataset.",
      "title": "Source"
    }
  },
  "required": [
    "name",
    "format_version",
    "type"
  ],
  "title": "dataset 0.3.0",
  "type": "object"
}

Fields:

Validators:

attachments pydantic-field ¤

attachments: List[FileDescr_]

file attachments

authors pydantic-field ¤

authors: FAIR[List[Author]]

The authors are the creators of this resource description and the primary points of contact.

badges pydantic-field ¤

badges: List[BadgeDescr]

badges associated with this resource

cite pydantic-field ¤

cite: FAIR[List[CiteEntry]]

citations

config pydantic-field ¤

config: Config

A field for custom configuration that can contain any keys not present in the RDF spec. This means you should not store, for example, a GitHub repo URL in config since there is a git_repo field. Keys in config may be very specific to a tool or consumer software. To avoid conflicting definitions, it is recommended to wrap added configuration into a sub-field named with the specific domain or tool name, for example:

config:
    giraffe_neckometer:  # here is the domain name
        length: 3837283
        address:
            home: zoo
    imagej:              # config specific to ImageJ
        macro_dir: path/to/macro/file
If possible, please use snake_case for keys in config. You may want to list linked files additionally under attachments to include them when packaging a resource. (Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains an altered rdf.yaml file with local references to the downloaded files.)

covers pydantic-field ¤

covers: List[FileSource_cover]

Cover images.

description pydantic-field ¤

description: FAIR[
    Annotated[
        str,
        MaxLen(1024),
        warn(
            MaxLen(512),
            "Description longer than 512 characters.",
        ),
    ]
] = ""

A string containing a brief description.

documentation pydantic-field ¤

documentation: FAIR[Optional[FileSource_documentation]] = (
    None
)

URL or relative path to a markdown file encoded in UTF-8 with additional documentation. The recommended documentation file name is README.md. An .md suffix is mandatory.

file_name property ¤

file_name: Optional[FileName]

File name of the bioimageio.yaml file the description was loaded from.

format_version pydantic-field ¤

format_version: Literal['0.3.0'] = '0.3.0'

git_repo pydantic-field ¤

git_repo: Annotated[
    Optional[HttpUrl],
    Field(
        examples=[
            "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
        ]
    ),
] = None

A URL to the Git repository where the resource is being developed.

icon pydantic-field ¤

icon: Union[
    Annotated[str, Len(min_length=1, max_length=2)],
    FileSource_,
    None,
] = None

An icon for illustration, e.g. on bioimage.io

id pydantic-field ¤

id: Optional[DatasetId] = None

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

id_emoji pydantic-field ¤

id_emoji: Optional[
    Annotated[
        str,
        Len(min_length=1, max_length=2),
        Field(examples=["🦈", "🦥"]),
    ]
] = None

UTF-8 emoji for display alongside the id.

implemented_format_version class-attribute ¤

implemented_format_version: Literal['0.3.0'] = '0.3.0'

implemented_format_version_tuple class-attribute ¤

implemented_format_version_tuple: Tuple[int, int, int]

implemented_type class-attribute ¤

implemented_type: Literal['dataset'] = 'dataset'

license pydantic-field ¤

license: FAIR[
    Annotated[
        Annotated[
            Union[LicenseId, DeprecatedLicenseId, None],
            Field(union_mode="left_to_right"),
        ],
        warn(
            Optional[LicenseId],
            "{value} is deprecated, see https://spdx.org/licenses/{value}.html",
        ),
        Field(examples=["CC0-1.0", "MIT", "BSD-2-Clause"]),
    ]
] = None

A SPDX license identifier. We do not support custom licenses beyond the SPDX license list. If you need one, please open a GitHub issue to discuss your intentions with the community.

links: Annotated[
    List[str],
    Field(
        examples=[
            (
                "ilastik/ilastik",
                "deepimagej/deepimagej",
                "zero/notebook_u-net_3d_zerocostdl4mic",
            )
        ]
    ),
]

IDs of other bioimage.io resources

maintainers pydantic-field ¤

maintainers: List[Maintainer]

Maintainers of this resource. If not specified, authors are maintainers and at least some of them have to specify their github_user name.

name pydantic-field ¤

name: Annotated[
    Annotated[
        str,
        RestrictCharacters(
            string.ascii_letters + string.digits + "_+- ()"
        ),
    ],
    MinLen(5),
    MaxLen(128),
    warn(
        MaxLen(64), "Name longer than 64 characters.", INFO
    ),
]

A human-friendly name of the resource description. May only contain letters, digits, underscores, plus and minus signs, parentheses, and spaces.
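The constraints above (restricted character set, length between 5 and 128) can be restated as a plain predicate. A minimal standard-library sketch for illustration only; the actual validation is performed by the pydantic annotations shown above:

```python
import string

# Characters the `name` field allows, per the RestrictCharacters
# annotation above: letters, digits, and "_+- ()".
ALLOWED = set(string.ascii_letters + string.digits + "_+- ()")

def check_name(name: str) -> bool:
    """Return True if `name` satisfies the hard constraints of the field."""
    return 5 <= len(name) <= 128 and all(c in ALLOWED for c in name)

print(check_name("unet2d_nuclei_broad (demo)"))  # valid
print(check_name("x"))                           # too short
```

Note that the additional `warn(MaxLen(64), ...)` annotation only emits a warning for names longer than 64 characters; it does not reject them.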

parent pydantic-field ¤

parent: Optional[DatasetId] = None

The description from which this one is derived

root property ¤

root: Union[RootHttpUrl, DirectoryPath, ZipFile]

The URL/Path prefix to resolve any relative paths with.

source pydantic-field ¤

source: FAIR[Optional[HttpUrl]] = None

"URL to the source of the dataset.

tags pydantic-field ¤

tags: FAIR[
    Annotated[
        List[str],
        Field(
            examples=[
                (
                    "unet2d",
                    "pytorch",
                    "nucleus",
                    "segmentation",
                    "dsb2018",
                )
            ]
        ),
    ]
]

Associated tags

type pydantic-field ¤

type: Literal['dataset'] = 'dataset'

uploader pydantic-field ¤

uploader: Optional[Uploader] = None

The person who uploaded the resource (e.g. to bioimage.io)

validation_summary property ¤

validation_summary: ValidationSummary

version pydantic-field ¤

version: Optional[Version] = None

The version of the resource following SemVer 2.0.

version_comment pydantic-field ¤

version_comment: Optional[Annotated[str, MaxLen(512)]] = (
    None
)

A comment on the version of the resource.

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any):
    super().__pydantic_init_subclass__(**kwargs)
    # set classvar implemented_format_version_tuple
    if "format_version" in cls.model_fields:
        if "." not in cls.implemented_format_version:
            cls.implemented_format_version_tuple = (0, 0, 0)
        else:
            fv_tuple = get_format_version_tuple(cls.implemented_format_version)
            assert fv_tuple is not None, (
                f"failed to cast '{cls.implemented_format_version}' to tuple"
            )
            cls.implemented_format_version_tuple = fv_tuple
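The cast that `get_format_version_tuple` performs above can be approximated with a plain split. A hedged sketch (the real internal helper may handle more cases):

```python
from typing import Optional, Tuple

def format_version_tuple(fv: str) -> Optional[Tuple[int, int, int]]:
    """Parse 'MAJOR.MINOR.PATCH' into an int tuple, or None on failure."""
    parts = fv.split(".")
    if len(parts) != 3:
        return None
    try:
        major, minor, patch = (int(p) for p in parts)
    except ValueError:
        return None
    return (major, minor, patch)

print(format_version_tuple("0.3.0"))  # (0, 3, 0)
```

This mirrors why the method above falls back to `(0, 0, 0)` when `implemented_format_version` contains no dot: such a string cannot be parsed as three dotted integers.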

convert_from_old_format_wo_validation classmethod ¤

convert_from_old_format_wo_validation(
    data: BioimageioYamlContent,
) -> None

Convert metadata following an older format version to this class's format without validating the result.

Source code in src/bioimageio/spec/generic/v0_3.py
@classmethod
def convert_from_old_format_wo_validation(cls, data: BioimageioYamlContent) -> None:
    """Convert metadata following an older format version to this classes' format
    without validating the result.
    """
    convert_from_older_format(data)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get_package_content ¤

get_package_content() -> Dict[
    FileName, Union[FileDescr, BioimageioYamlContent]
]

Returns package content without creating the package.

Source code in src/bioimageio/spec/_internal/common_nodes.py
def get_package_content(
    self,
) -> Dict[FileName, Union[FileDescr, BioimageioYamlContent]]:
    """Returns package content without creating the package."""
    content: Dict[FileName, FileDescr] = {}
    with PackagingContext(
        bioimageio_yaml_file_name=BIOIMAGEIO_YAML,
        file_sources=content,
    ):
        rdf_content: BioimageioYamlContent = self.model_dump(
            mode="json", exclude_unset=True
        )

    _ = rdf_content.pop("rdf_source", None)

    return {**content, BIOIMAGEIO_YAML: rdf_content}

load classmethod ¤

load(
    data: IncompleteDescrView,
    context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]

factory method to create a resource description object

Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def load(
    cls,
    data: IncompleteDescrView,
    context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]:
    """factory method to create a resource description object"""

    context = context or get_validation_context()
    if context.perform_io_checks:
        file_descrs = extract_file_descrs(data)
        populate_cache(file_descrs)  # TODO: add progress bar

    with context.replace(log_warnings=context.warning_level <= INFO):
        rd, errors, val_warnings = cls._load_impl(deepcopy_incomplete_descr(data))

    if context.warning_level > INFO:
        all_warnings_context = context.replace(
            warning_level=INFO, log_warnings=False, raise_errors=False
        )
        # raise all validation warnings by reloading
        with all_warnings_context:
            _, _, val_warnings = cls._load_impl(deepcopy_incomplete_descr(data))

    format_status = "failed" if errors else "passed"
    rd.validation_summary.add_detail(
        ValidationDetail(
            errors=errors,
            name=(
                "bioimageio.spec format validation"
                f" {rd.type} {cls.implemented_format_version}"
            ),
            status=format_status,
            warnings=val_warnings,
        ),
        update_status=False,  # this special validation detail needs manual format updating below
    )
    assert format_status != "failed" or isinstance(rd, InvalidDescr)

    return rd

load_from_kwargs classmethod ¤

load_from_kwargs(
    context: Optional[ValidationContext] = None,
    *args: P.args,
    **kwargs: P.kwargs,
) -> Union[T, InvalidDescr]
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def load_from_kwargs(
    cls: Callable[P, T],
    context: Optional[ValidationContext] = None,
    *args: P.args,
    **kwargs: P.kwargs,
) -> Union[T, InvalidDescr]:
    sig = signature(cls)
    bound = sig.bind_partial(*args, **kwargs)
    return cls.load(dict(bound.arguments), context=context)  # pyright: ignore[reportFunctionMemberAccess]

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

package ¤

package(
    dest: Optional[
        Union[ZipFile, IO[bytes], Path, str]
    ] = None,
) -> ZipFile

package the described resource as a zip archive

Parameters:

- dest (Optional[Union[ZipFile, IO[bytes], Path, str]]): (path/bytes stream of) destination zipfile. Default: None.
Source code in src/bioimageio/spec/_internal/common_nodes.py
def package(
    self, dest: Optional[Union[ZipFile, IO[bytes], Path, str]] = None, /
) -> ZipFile:
    """package the described resource as a zip archive

    Args:
        dest: (path/bytes stream of) destination zipfile
    """
    if dest is None:
        dest = BytesIO()

    if isinstance(dest, ZipFile):
        zip = dest
        if "r" in zip.mode:
            raise ValueError(
                f"zip file {dest} opened in '{zip.mode}' mode,"
                + " but write access is needed for packaging."
            )
    else:
        zip = ZipFile(dest, mode="w")

    if zip.filename is None:
        zip.filename = (
            str(getattr(self, "id", getattr(self, "name", "bioimageio"))) + ".zip"
        )

    content = self.get_package_content()
    write_content_to_zip(content, zip)
    return zip
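The mode guard at the top of `package` can be exercised with the standard library alone. A toy reproduction of that check, not the bioimageio implementation:

```python
from io import BytesIO
from zipfile import ZipFile

def require_writable(zip: ZipFile) -> None:
    # Mirrors the guard in `package`: a ZipFile opened for reading
    # cannot be used as a packaging destination.
    if "r" in zip.mode:
        raise ValueError(
            f"zip file opened in '{zip.mode}' mode,"
            " but write access is needed for packaging."
        )

buffer = BytesIO()
with ZipFile(buffer, mode="w") as zf:
    require_writable(zf)  # passes: opened for writing
    zf.writestr("rdf.yaml", "type: dataset\n")

with ZipFile(buffer, mode="r") as zf:
    try:
        require_writable(zf)
    except ValueError as e:
        print(e)  # the guard rejects read-only destinations
```

Passing `dest=None` to `package` works the same way as the `BytesIO` fallback here: the archive is built in memory and returned as an open `ZipFile`.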

warn_about_tag_categories pydantic-validator ¤

warn_about_tag_categories(
    value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
@as_warning
@field_validator("tags")
@classmethod
def warn_about_tag_categories(
    cls, value: List[str], info: ValidationInfo
) -> List[str]:
    categories = TAG_CATEGORIES.get(info.data["type"], {})
    missing_categories: List[Dict[str, Sequence[str]]] = []
    for cat, entries in categories.items():
        if not any(e in value for e in entries):
            missing_categories.append({cat: entries})

    if missing_categories:
        raise ValueError(
            f"Missing tags from bioimage.io categories: {missing_categories}"
        )

    return value
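The validator logic above can be run standalone with a toy category mapping. Note that `TAG_CATEGORIES` below is a made-up stand-in for illustration, not the real bioimage.io mapping:

```python
from typing import Dict, List, Sequence

# Hypothetical stand-in for the real TAG_CATEGORIES mapping.
TAG_CATEGORIES: Dict[str, Dict[str, Sequence[str]]] = {
    "dataset": {
        "modality": ("electron-microscopy", "fluorescence-light-microscopy"),
    }
}

def missing_tag_categories(
    type_: str, tags: List[str]
) -> List[Dict[str, Sequence[str]]]:
    """Return the categories for which `tags` contains no known entry."""
    categories = TAG_CATEGORIES.get(type_, {})
    missing: List[Dict[str, Sequence[str]]] = []
    for cat, entries in categories.items():
        if not any(e in tags for e in entries):
            missing.append({cat: entries})
    return missing

print(missing_tag_categories("dataset", ["fluorescence-light-microscopy"]))  # []
```

Because the validator is wrapped with `@as_warning`, a non-empty result only produces a validation warning rather than rejecting the resource.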

DatasetId ¤

Bases: ResourceId


Inheritance: ValidatedString → ResourceId → DatasetId

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        NotEmpty[str],
        RestrictCharacters(
            string.ascii_lowercase + string.digits + "_-/."
        ),
        annotated_types.Predicate(
            lambda s: (
                not (s.startswith("/") or s.endswith("/"))
            )
        ),
    ]
]

the pydantic root model to validate the string
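The constraints encoded in this root model (non-empty, restricted character set, no leading or trailing slash) can be restated as a plain predicate. An illustrative sketch, not the pydantic implementation:

```python
import string

# Character set from the RestrictCharacters annotation above.
ALLOWED = set(string.ascii_lowercase + string.digits + "_-/.")

def is_valid_resource_id(s: str) -> bool:
    """Non-empty, restricted characters, and no leading/trailing slash."""
    return (
        len(s) > 0
        and all(c in ALLOWED for c in s)
        and not s.startswith("/")
        and not s.endswith("/")
    )

print(is_valid_resource_id("ilastik/ilastik"))  # True
print(is_valid_resource_id("/bad-id/"))         # False
```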

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
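The pattern used here, a `str` subclass that validates its value in `__new__`, can be sketched without pydantic. A minimal toy example (the real class delegates validation to its `root_model`):

```python
class LowercaseId(str):
    """Toy validated string: accepts only non-empty lowercase identifiers."""

    def __new__(cls, object: object):
        value = str(object)
        # Validate before constructing the immutable str instance.
        if not value or not value.islower():
            raise ValueError(f"invalid id: {value!r}")
        return super().__new__(cls, value)

print(LowercaseId("ilastik"))                    # ilastik
print(isinstance(LowercaseId("ilastik"), str))   # True
```

Because `str` is immutable, validation must happen in `__new__` rather than `__init__`; the instance behaves like a plain string everywhere afterwards.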

Datetime ¤

Bases: RootModel[Annotated[datetime, BeforeValidator(_validate_datetime), PrettyPlainSerializer(_serialize_datetime_json, when_used='json-unless-none')]]



Timestamp in ISO 8601 format with a few restrictions.

Methods:

Name Description
now

now classmethod ¤

now()
Source code in src/bioimageio/spec/_internal/types.py
@classmethod
def now(cls):
    return cls(datetime.now(UTC))

DeprecatedLicenseId ¤

Bases: ValidatedString



Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    DeprecatedLicenseIdLiteral
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

Doi ¤

Bases: ValidatedString



A digital object identifier, see https://www.doi.org/

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[str, StringConstraints(pattern=DOI_REGEX)]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

EnsureDtypeDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Cast the tensor data type to EnsureDtypeKwargs.dtype (if not matching).

This can for example be used to ensure the inner neural network model gets a different input tensor data type than the fully described bioimage.io model does.

Examples:

The described bioimage.io model (incl. preprocessing) accepts any float32-compatible tensor, normalizes it with percentiles and clipping, and then casts it to uint8, which is what the neural network in this example expects.

- in YAML:

inputs:
- data:
    type: float32  # described bioimage.io model is compatible with any float32 input tensor
  preprocessing:
  - id: scale_range
    kwargs:
      axes: ['y', 'x']
      max_percentile: 99.8
      min_percentile: 5.0
  - id: clip
    kwargs:
      min: 0.0
      max: 1.0
  - id: ensure_dtype  # the neural network of the model requires uint8
    kwargs:
      dtype: uint8
- in Python:

    >>> preprocessing = [
    ...     ScaleRangeDescr(
    ...         kwargs=ScaleRangeKwargs(
    ...             axes=(AxisId('y'), AxisId('x')),
    ...             max_percentile=99.8,
    ...             min_percentile=5.0,
    ...         )
    ...     ),
    ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),
    ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype="uint8")),
    ... ]

Show JSON schema:
{
  "$defs": {
    "EnsureDtypeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [EnsureDtypeDescr][]",
      "properties": {
        "dtype": {
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "title": "Dtype",
          "type": "string"
        }
      },
      "required": [
        "dtype"
      ],
      "title": "model.v0_5.EnsureDtypeKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n    The described bioimage.io model (incl. preprocessing) accepts any\n    float32-compatible tensor, normalizes it with percentiles and clipping and then\n    casts it to uint8, which is what the neural network in this example expects.\n    - in YAML\n        ```yaml\n        inputs:\n        - data:\n            type: float32  # described bioimage.io model is compatible with any float32 input tensor\n          preprocessing:\n          - id: scale_range\n              kwargs:\n              axes: ['y', 'x']\n              max_percentile: 99.8\n              min_percentile: 5.0\n          - id: clip\n              kwargs:\n              min: 0.0\n              max: 1.0\n          - id: ensure_dtype  # the neural network of the model requires uint8\n              kwargs:\n              dtype: uint8\n        ```\n    - in Python:\n        >>> preprocessing = [\n        ...     ScaleRangeDescr(\n        ...         kwargs=ScaleRangeKwargs(\n        ...           axes= (AxisId('y'), AxisId('x')),\n        ...           max_percentile= 99.8,\n        ...           min_percentile= 5.0,\n        ...         )\n        ...     ),\n        ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n        ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n        ... ]",
  "properties": {
    "id": {
      "const": "ensure_dtype",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/EnsureDtypeKwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.EnsureDtypeDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['ensure_dtype'] = 'ensure_dtype'

implemented_id class-attribute ¤

implemented_id: Literal['ensure_dtype'] = 'ensure_dtype'

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explicit_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explicit_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explicit_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

EnsureDtypeKwargs pydantic-model ¤

Bases: KwargsNode

key word arguments for EnsureDtypeDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [EnsureDtypeDescr][]",
  "properties": {
    "dtype": {
      "enum": [
        "float32",
        "float64",
        "uint8",
        "int8",
        "uint16",
        "int16",
        "uint32",
        "int32",
        "uint64",
        "int64",
        "bool"
      ],
      "title": "Dtype",
      "type": "string"
    }
  },
  "required": [
    "dtype"
  ],
  "title": "model.v0_5.EnsureDtypeKwargs",
  "type": "object"
}

Fields:

  • dtype (Literal['float32', 'float64', 'uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64', 'bool'])

dtype pydantic-field ¤

dtype: Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
    "bool",
]
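As a sketch of what the `Literal` constraint above enforces, here is a hypothetical validator (not part of bioimageio.spec) that accepts only the eleven permitted dtype names:

```python
# Hypothetical stand-in for the dtype constraint of EnsureDtypeKwargs.
ALLOWED_DTYPES = frozenset({
    "float32", "float64", "uint8", "int8", "uint16", "int16",
    "uint32", "int32", "uint64", "int64", "bool",
})

def check_dtype(dtype: str) -> str:
    """Return dtype unchanged if allowed, else raise ValueError."""
    if dtype not in ALLOWED_DTYPES:
        raise ValueError(f"unsupported dtype: {dtype!r}")
    return dtype
```
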

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default
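The `__contains__`/`__getitem__`/`get` trio gives kwargs nodes a read-only, mapping-like view of their declared fields. A rough stdlib sketch of the same pattern, with a `dataclass` standing in for the pydantic model (class and field names here are illustrative):

```python
from dataclasses import dataclass, fields

@dataclass
class KwargsLike:
    """Minimal stand-in for a pydantic KwargsNode with one field."""
    dtype: str

    def __contains__(self, item: str) -> bool:
        return item in {f.name for f in fields(self)}

    def __getitem__(self, item: str):
        if item in self:
            return getattr(self, item)
        raise KeyError(item)

    def get(self, item: str, default=None):
        return self[item] if item in self else default

kw = KwargsLike(dtype="uint8")
```
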

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

EnvironmentalImpact pydantic-model ¤

Bases: Node

Environmental considerations for model training and deployment.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

Show JSON schema:
{
  "additionalProperties": true,
  "description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
  "properties": {
    "hardware_type": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "GPU/CPU specifications",
      "title": "Hardware Type"
    },
    "hours_used": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Total compute hours",
      "title": "Hours Used"
    },
    "cloud_provider": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "If applicable",
      "title": "Cloud Provider"
    },
    "compute_region": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Geographic location",
      "title": "Compute Region"
    },
    "co2_emitted": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
      "title": "Co2 Emitted"
    }
  },
  "title": "model.v0_5.EnvironmentalImpact",
  "type": "object"
}

Fields:

cloud_provider pydantic-field ¤

cloud_provider: Optional[str] = None

If applicable

co2_emitted pydantic-field ¤

co2_emitted: Optional[float] = None

kg CO2 equivalent

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

compute_region pydantic-field ¤

compute_region: Optional[str] = None

Geographic location

hardware_type pydantic-field ¤

hardware_type: Optional[str] = None

GPU/CPU specifications

hours_used pydantic-field ¤

hours_used: Optional[float] = None

Total compute hours

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

format_md ¤

format_md()

Filled Markdown template section following Hugging Face Model Card Template.

Source code in src/bioimageio/spec/model/v0_5.py
def format_md(self):
    """Filled Markdown template section following [Hugging Face Model Card Template](https://huggingface.co/docs/hub/en/model-card-annotated)."""
    if self == self.__class__():
        return ""

    ret = "# Environmental Impact\n\n"
    if self.hardware_type is not None:
        ret += f"- **Hardware Type:** {self.hardware_type}\n"
    if self.hours_used is not None:
        ret += f"- **Hours used:** {self.hours_used}\n"
    if self.cloud_provider is not None:
        ret += f"- **Cloud Provider:** {self.cloud_provider}\n"
    if self.compute_region is not None:
        ret += f"- **Compute Region:** {self.compute_region}\n"
    if self.co2_emitted is not None:
        ret += f"- **Carbon Emitted:** {self.co2_emitted} kg CO2e\n"

    return ret + "\n"
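The conditional bullet assembly above can be sketched without pydantic; the function name and field values below are hypothetical stand-ins for the model's fields:

```python
def format_environmental_impact_md(
    hardware_type=None, hours_used=None, cloud_provider=None,
    compute_region=None, co2_emitted=None,
):
    """Emit a Markdown section, or '' when no field is set (as in format_md)."""
    entries = [
        ("Hardware Type", hardware_type),
        ("Hours used", hours_used),
        ("Cloud Provider", cloud_provider),
        ("Compute Region", compute_region),
    ]
    if all(v is None for _, v in entries) and co2_emitted is None:
        return ""
    ret = "# Environmental Impact\n\n"
    for label, value in entries:
        if value is not None:
            ret += f"- **{label}:** {value}\n"
    if co2_emitted is not None:
        ret += f"- **Carbon Emitted:** {co2_emitted} kg CO2e\n"
    return ret + "\n"

md = format_environmental_impact_md(hardware_type="1x A100", hours_used=12.5)
```
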

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Evaluation pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": true,
  "properties": {
    "model_id": {
      "anyOf": [
        {
          "minLength": 1,
          "title": "ModelId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Model being evaluated.",
      "title": "Model Id"
    },
    "dataset_id": {
      "description": "Dataset used for evaluation.",
      "minLength": 1,
      "title": "DatasetId",
      "type": "string"
    },
    "dataset_source": {
      "description": "Source of the dataset.",
      "format": "uri",
      "maxLength": 2083,
      "minLength": 1,
      "title": "HttpUrl",
      "type": "string"
    },
    "dataset_role": {
      "description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
      "enum": [
        "train",
        "validation",
        "test",
        "independent",
        "unknown"
      ],
      "title": "Dataset Role",
      "type": "string"
    },
    "sample_count": {
      "description": "Number of evaluated samples.",
      "title": "Sample Count",
      "type": "integer"
    },
    "evaluation_factors": {
      "description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
      "items": {
        "maxLength": 16,
        "type": "string"
      },
      "title": "Evaluation Factors",
      "type": "array"
    },
    "evaluation_factors_long": {
      "description": "Descriptions (long form) of each evaluation factor.",
      "items": {
        "type": "string"
      },
      "title": "Evaluation Factors Long",
      "type": "array"
    },
    "metrics": {
      "description": "(Abbreviations of) metrics used for evaluation.",
      "items": {
        "maxLength": 16,
        "type": "string"
      },
      "title": "Metrics",
      "type": "array"
    },
    "metrics_long": {
      "description": "Description of each metric used.",
      "items": {
        "type": "string"
      },
      "title": "Metrics Long",
      "type": "array"
    },
    "results": {
      "description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
      "items": {
        "items": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "number"
            },
            {
              "type": "integer"
            }
          ]
        },
        "type": "array"
      },
      "title": "Results",
      "type": "array"
    },
    "results_summary": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Interpretation of results for general audience.\n\nConsider:\n    - Overall model performance\n    - Comparison to existing methods\n    - Limitations and areas for improvement",
      "title": "Results Summary"
    }
  },
  "required": [
    "dataset_id",
    "dataset_source",
    "dataset_role",
    "sample_count",
    "evaluation_factors",
    "evaluation_factors_long",
    "metrics",
    "metrics_long",
    "results"
  ],
  "title": "model.v0_5.Evaluation",
  "type": "object"
}

Fields:

Validators:

  • _validate_list_lengths

dataset_id pydantic-field ¤

dataset_id: DatasetId

Dataset used for evaluation.

dataset_role pydantic-field ¤

dataset_role: Literal[
    "train", "validation", "test", "independent", "unknown"
]

Role of the dataset used for evaluation.

  • train: dataset was (part of) the training data
  • validation: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning
  • test: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data
  • independent: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data
  • unknown: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.

dataset_source pydantic-field ¤

dataset_source: HttpUrl

Source of the dataset.

evaluation_factors pydantic-field ¤

evaluation_factors: List[Annotated[str, MaxLen(16)]]

(Abbreviations of) each evaluation factor.

Evaluation factors are criteria along which model performance is evaluated, e.g. different image conditions like 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'. An 'overall' factor may be included to summarize performance across all conditions.

evaluation_factors_long pydantic-field ¤

evaluation_factors_long: List[str]

Descriptions (long form) of each evaluation factor.

metrics pydantic-field ¤

metrics: List[Annotated[str, MaxLen(16)]]

(Abbreviations of) metrics used for evaluation.

metrics_long pydantic-field ¤

metrics_long: List[str]

Description of each metric used.

model_id pydantic-field ¤

model_id: Optional[ModelId] = None

Model being evaluated.

results pydantic-field ¤

results: List[List[Union[str, float, int]]]

Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).

results_summary pydantic-field ¤

results_summary: Optional[str] = None

Interpretation of results for general audience.

Consider:
  • Overall model performance
  • Comparison to existing methods
  • Limitations and areas for improvement

sample_count pydantic-field ¤

sample_count: int

Number of evaluated samples.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

format_md ¤

format_md()
Source code in src/bioimageio/spec/model/v0_5.py
    def format_md(self):
        results_header = ["Metric"] + self.evaluation_factors
        results_table_cells = [results_header, ["---"] * len(results_header)] + [
            [metric] + [str(r) for r in row]
            for metric, row in zip(self.metrics, self.results)
        ]

        results_table = "".join(
            "| " + " | ".join(row) + " |\n" for row in results_table_cells
        )
        factors = "".join(
            f"\n - {ef}: {efl}"
            for ef, efl in zip(self.evaluation_factors, self.evaluation_factors_long)
        )
        metrics = "".join(
            f"\n - {em}: {eml}" for em, eml in zip(self.metrics, self.metrics_long)
        )

        return f"""## Testing Data, Factors & Metrics

Evaluation of {self.model_id or "this"} model on the {self.dataset_id} dataset (dataset role: {self.dataset_role}).

### Testing Data

- **Source:** [{self.dataset_id}]({self.dataset_source})
- **Size:** {self.sample_count} evaluated samples

### Factors
{factors}

### Metrics
{metrics}

## Results

### Quantitative Results

{results_table}

### Summary

{self.results_summary or "missing"}

"""

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

FileDescr pydantic-model ¤

Bases: Node

A file description

Show JSON schema:
{
  "$defs": {
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    }
  },
  "additionalProperties": false,
  "description": "A file description",
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "File source",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    }
  },
  "required": [
    "source"
  ],
  "title": "_internal.io.FileDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: FileSource

File source

suffix property ¤

suffix: str

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
| --- | --- |
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
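
The hash comparison above boils down to computing a SHA-256 digest and comparing it to an expected value. A minimal, self-contained sketch of that core check (stdlib `hashlib` only; `check_sha256` is a hypothetical helper, not the library's implementation, and the error wording is illustrative):

```python
import hashlib


def check_sha256(data: bytes, expected=None) -> str:
    """Return the SHA-256 hex digest of `data`.

    Raises ValueError on mismatch with `expected`, mirroring the spirit
    of `validate_sha256` (illustrative sketch, not the real method).
    """
    actual = hashlib.sha256(data).hexdigest()
    if expected is not None and expected != actual:
        raise ValueError(f"Sha256 mismatch: expected {expected}, got {actual}")
    return actual
```

In `validate_sha256` the digest additionally comes from a cached `known_files` mapping in the validation context when available, so files are only hashed once per context.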

FixedZeroMeanUnitVarianceAlongAxisKwargs pydantic-model ¤

Bases: KwargsNode

keyword arguments for FixedZeroMeanUnitVarianceDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
  "properties": {
    "mean": {
      "description": "The mean value(s) to normalize with.",
      "items": {
        "type": "number"
      },
      "minItems": 1,
      "title": "Mean",
      "type": "array"
    },
    "std": {
      "description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
      "items": {
        "minimum": 1e-06,
        "type": "number"
      },
      "minItems": 1,
      "title": "Std",
      "type": "array"
    },
    "axis": {
      "description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
      "examples": [
        "channel",
        "index"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    }
  },
  "required": [
    "mean",
    "std",
    "axis"
  ],
  "title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
  "type": "object"
}

Fields:

Validators:

  • _mean_and_std_match

axis pydantic-field ¤

axis: Annotated[
    NonBatchAxisId, Field(examples=["channel", "index"])
]

The axis along which the mean/std values apply, normalizing each entry along that dimension separately.

mean pydantic-field ¤

mean: NotEmpty[List[float]]

The mean value(s) to normalize with.

std pydantic-field ¤

std: NotEmpty[List[Annotated[float, Ge(1e-06)]]]

The standard deviation value(s) to normalize with. Size must match mean values.
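
The `_mean_and_std_match` validator listed under Validators enforces that `mean` and `std` contain the same number of values. A hedged plain-Python sketch of that consistency check (not the actual pydantic validator; the exact error message may differ):

```python
def mean_and_std_match(mean, std):
    """Require one std value per mean value (sketch of `_mean_and_std_match`)."""
    if len(mean) != len(std):
        raise ValueError(
            f"size of std ({len(std)}) must match size of mean ({len(mean)})"
        )
    return mean, std
```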

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

FixedZeroMeanUnitVarianceDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Subtract a given mean and divide by the standard deviation.

Normalize with fixed, precomputed values for FixedZeroMeanUnitVarianceKwargs.mean and FixedZeroMeanUnitVarianceKwargs.std. Use FixedZeroMeanUnitVarianceAlongAxisKwargs for independent scaling along given axes.

Examples:

  1. scalar value for whole tensor

    • in YAML
      preprocessing:
        - id: fixed_zero_mean_unit_variance
          kwargs:
            mean: 103.5
            std: 13.7
      
    • in Python

      preprocessing = [FixedZeroMeanUnitVarianceDescr(
          kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)
      )]

  2. independently along an axis

    • in YAML
      preprocessing:
        - id: fixed_zero_mean_unit_variance
          kwargs:
            axis: channel
            mean: [101.5, 102.5, 103.5]
            std: [11.7, 12.7, 13.7]
      
    • in Python

      preprocessing = [FixedZeroMeanUnitVarianceDescr(
          kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(
              axis=AxisId("channel"),
              mean=[101.5, 102.5, 103.5],
              std=[11.7, 12.7, 13.7],
          )
      )]
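
Numerically, both variants apply the same elementwise transform, x_norm = (x - mean) / std; the per-axis form just selects one mean/std pair per entry along the chosen axis. A minimal pure-Python sketch of the scalar case (illustrative only; the spec applies this to tensors, not lists):

```python
def fixed_zero_mean_unit_variance(values, mean, std):
    """Elementwise (x - mean) / std with fixed, precomputed statistics.

    Illustrative sketch; operates on a flat list instead of a tensor.
    """
    if std < 1e-6:  # mirrors the schema's minimum for `std`
        raise ValueError("std must be >= 1e-06")
    return [(x - mean) / std for x in values]
```

With mean=103.5 and std=13.7 as in example 1, an input value of 103.5 maps to exactly 0.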

Show JSON schema:
{
  "$defs": {
    "FixedZeroMeanUnitVarianceAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value(s) to normalize with.",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Mean",
          "type": "array"
        },
        "std": {
          "description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
          "items": {
            "minimum": 1e-06,
            "type": "number"
          },
          "minItems": 1,
          "title": "Std",
          "type": "array"
        },
        "axis": {
          "description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
          "examples": [
            "channel",
            "index"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "mean",
        "std",
        "axis"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value to normalize with.",
          "title": "Mean",
          "type": "number"
        },
        "std": {
          "description": "The standard deviation value to normalize with.",
          "minimum": 1e-06,
          "title": "Std",
          "type": "number"
        }
      },
      "required": [
        "mean",
        "std"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          mean: 103.5\n          std: 13.7\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n    ... )]\n\n2. independently along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          axis: channel\n          mean: [101.5, 102.5, 103.5]\n          std: [11.7, 12.7, 13.7]\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n    ...     axis=AxisId(\"channel\"),\n    ...     mean=[101.5, 102.5, 103.5],\n    ...     std=[11.7, 12.7, 13.7],\n    ...   )\n    ... )]",
  "properties": {
    "id": {
      "const": "fixed_zero_mean_unit_variance",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "anyOf": [
        {
          "$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
        },
        {
          "$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
        }
      ],
      "title": "Kwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal["fixed_zero_mean_unit_variance"] = (
    "fixed_zero_mean_unit_variance"
)

implemented_id class-attribute ¤

implemented_id: Literal["fixed_zero_mean_unit_variance"] = (
    "fixed_zero_mean_unit_variance"
)

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

FixedZeroMeanUnitVarianceKwargs pydantic-model ¤

Bases: KwargsNode

keyword arguments for FixedZeroMeanUnitVarianceDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
  "properties": {
    "mean": {
      "description": "The mean value to normalize with.",
      "title": "Mean",
      "type": "number"
    },
    "std": {
      "description": "The standard deviation value to normalize with.",
      "minimum": 1e-06,
      "title": "Std",
      "type": "number"
    }
  },
  "required": [
    "mean",
    "std"
  ],
  "title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
  "type": "object"
}

Fields:

  • mean (float)
  • std (Annotated[float, Ge(1e-06)])

mean pydantic-field ¤

mean: float

The mean value to normalize with.

std pydantic-field ¤

std: Annotated[float, Ge(1e-06)]

The standard deviation value to normalize with.

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

HttpUrl ¤

Bases: RootHttpUrl


Class hierarchy: HttpUrl → RootHttpUrl → ValidatedString

A URL with the HTTP or HTTPS scheme.

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__
__truediv__
absolute

analog to absolute method of pathlib.

exists

True if URL is available

Attributes:

Name Type Description
host Optional[str]
parent RootHttpUrl
parents Iterable[RootHttpUrl]

iterate over all URL parents (max 100)

path Optional[str]
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

scheme str
suffix str

host property ¤

host: Optional[str]

parent property ¤

parent: RootHttpUrl

parents property ¤

parents: Iterable[RootHttpUrl]

iterate over all URL parents (max 100)

path property ¤

path: Optional[str]

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    pydantic.HttpUrl
]

the pydantic root model to validate the string

scheme property ¤

scheme: str

suffix property ¤

suffix: str

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

__truediv__ ¤

__truediv__(other: str) -> RootHttpUrl
Source code in src/bioimageio/spec/_internal/root_url.py
def __truediv__(self, other: str) -> RootHttpUrl:
    parsed = urlsplit(str(self))
    return RootHttpUrl(
        urlunsplit(
            (
                parsed.scheme,
                parsed.netloc,
                f"{parsed.path.strip('/')}/{other.strip('/')}",
                parsed.query,
                parsed.fragment,
            )
        )
    )
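
The `/` operator above appends a path segment to the URL, normalizing the slashes on either side. A hedged sketch of the same split/join logic with only the standard library (`join_url` is a hypothetical helper name, not part of the spec package):

```python
from urllib.parse import urlsplit, urlunsplit


def join_url(base: str, other: str) -> str:
    """Append `other` as a path segment of `base` (sketch of `__truediv__`)."""
    p = urlsplit(base)
    # strip surrounding slashes so "a/" / "/b" yields "a/b", not "a//b"
    path = f"{p.path.strip('/')}/{other.strip('/')}"
    return urlunsplit((p.scheme, p.netloc, path, p.query, p.fragment))
```

For example, joining `"b"` onto `https://example.com/a/` yields `https://example.com/a/b`.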

absolute ¤

absolute()

analog to absolute method of pathlib.

Source code in src/bioimageio/spec/_internal/root_url.py
def absolute(self):
    """analog to `absolute` method of pathlib."""
    return self

exists ¤

exists()

True if URL is available

Source code in src/bioimageio/spec/_internal/url.py
def exists(self):
    """True if URL is available"""
    if self._exists is None:
        ctxt = get_validation_context()
        try:
            with ctxt.replace(warning_level=warning_levels.WARNING):
                self._validated = _validate_url(self._validated)
        except Exception as e:
            if ctxt.log_warnings:
                logger.info(e)

            self._exists = False
        else:
            self._exists = True

    return self._exists

Identifier ¤

Bases: ValidatedString


Class hierarchy: Identifier → ValidatedString

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[IdentifierAnno]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
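
`__new__` above illustrates the validated-string pattern used by these classes: subclass `str`, validate and normalize in `__new__`, and return the validated value so instances are ordinary strings afterwards. A generic sketch of the pattern (hypothetical `ShortId` class with a simple length rule standing in for the pydantic root model):

```python
class ShortId(str):
    """A str subclass validated at construction time (pattern sketch)."""

    MAX_LEN = 16  # illustrative limit, analogous to a root model constraint

    def __new__(cls, value: object):
        text = str(value).strip()
        if not text or len(text) > cls.MAX_LEN:
            raise ValueError(f"invalid id: {value!r}")
        # store the normalized value; the instance behaves as a plain str
        return super().__new__(cls, text)
```

Because the result is a `str`, validated identifiers can be used anywhere a string is expected (dict keys, f-strings, comparisons) without unwrapping.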

IndexAxisBase pydantic-model ¤

Bases: AxisBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "index",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "index",
      "title": "Type",
      "type": "string"
    }
  },
  "required": [
    "type"
  ],
  "title": "model.v0_5.IndexAxisBase",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['index'] = 'index'

scale property ¤

scale: float

type pydantic-field ¤

type: Literal['index'] = 'index'

unit property ¤

unit

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

IndexInputAxis pydantic-model ¤

Bases: IndexAxisBase, _WithInputAxisSize

Show JSON schema:
{
  "$defs": {
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/ParameterizedSize"
        },
        {
          "$ref": "#/$defs/SizeReference"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
      "examples": [
        10,
        {
          "min": 32,
          "step": 16
        },
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    },
    "id": {
      "default": "index",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "index",
      "title": "Type",
      "type": "string"
    },
    "concatenable": {
      "default": false,
      "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
      "title": "Concatenable",
      "type": "boolean"
    }
  },
  "required": [
    "size",
    "type"
  ],
  "title": "model.v0_5.IndexInputAxis",
  "type": "object"
}

Fields:

concatenable pydantic-field ¤

concatenable: bool = False

If a model has a concatenable input axis, it can be processed blockwise, splitting a longer sample axis into blocks matching its input tensor description. Output axes are concatenable if they have a SizeReference to a concatenable input axis.
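Blockwise processing along a concatenable axis amounts to splitting the sample axis into blocks, running the model per block, and concatenating the per-block outputs in the same order. A minimal sketch with plain lists (the helper names are illustrative, not spec API):

```python
def split_into_blocks(seq, block_len):
    """Split a sequence along a concatenable axis into blocks of at most block_len."""
    return [seq[i : i + block_len] for i in range(0, len(seq), block_len)]


def concat_blocks(blocks):
    """Concatenate per-block results back along the same axis."""
    return [item for block in blocks for item in block]


sample = list(range(100))
blocks = split_into_blocks(sample, 32)  # block lengths: 32, 32, 32, 4
assert concat_blocks(blocks) == sample  # round trip restores the sample axis
```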

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['index'] = 'index'

scale property ¤

scale: float

size pydantic-field ¤

size: Annotated[
    Union[
        Annotated[int, Gt(0)],
        ParameterizedSize,
        SizeReference,
    ],
    Field(
        examples=[
            10,
            ParameterizedSize(min=32, step=16).model_dump(
                mode="json"
            ),
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as
- a fixed integer
- a parameterized series of valid sizes (ParameterizedSize)
- a reference to another axis with an optional offset (SizeReference)
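For the parameterized form, valid sizes follow `size = min + n*step` for n = 0, 1, 2, ... A short sketch enumerating valid sizes and finding the smallest valid size at or above a target (the helper names are illustrative, not spec API):

```python
def parameterized_sizes(min_size, step, n_max):
    """All valid sizes size = min + n*step for n = 0..n_max."""
    return [min_size + n * step for n in range(n_max + 1)]


def smallest_valid_size(target, min_size, step):
    """Smallest valid parameterized size >= target."""
    if target <= min_size:
        return min_size
    n = -(-(target - min_size) // step)  # ceiling division
    return min_size + n * step


# using the schema's example values min=32, step=16
assert parameterized_sizes(32, 16, 3) == [32, 48, 64, 80]
assert smallest_valid_size(50, 32, 16) == 64
```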

type pydantic-field ¤

type: Literal['index'] = 'index'

unit property ¤

unit

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)
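The hook above collects `implemented_*` class attributes as explicitly set values for the matching model fields (e.g. `implemented_type = "index"` backs the `type` field). A toy sketch of that pattern with a plain `__init_subclass__`, where `_fields` is a hypothetical stand-in for pydantic's `model_fields`:

```python
class Base:
    _fields = {"type": None}  # stand-in for pydantic's model_fields

    def __init_subclass__(cls, **kwargs):
        explicit = {}
        for attr in dir(cls):
            if attr.startswith("implemented_"):
                field_name = attr.removeprefix("implemented_")
                if field_name in cls._fields:
                    explicit[field_name] = getattr(cls, attr)
        cls._explicit = explicit  # fields to set explicitly on every instance
        super().__init_subclass__(**kwargs)


class IndexAxis(Base):
    implemented_type = "index"


assert IndexAxis._explicit == {"type": "index"}
```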

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- extra (Optional[Literal["allow", "ignore", "forbid"]]): Currently ignored; a warning is emitted if set. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

IndexOutputAxis pydantic-model ¤

Bases: IndexAxisBase

Show JSON schema:
{
  "$defs": {
    "DataDependentSize": {
      "additionalProperties": false,
      "properties": {
        "min": {
          "default": 1,
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "max": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Max"
        }
      },
      "title": "model.v0_5.DataDependentSize",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "index",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "index",
      "title": "Type",
      "type": "string"
    },
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/SizeReference"
        },
        {
          "$ref": "#/$defs/DataDependentSize"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    }
  },
  "required": [
    "type",
    "size"
  ],
  "title": "model.v0_5.IndexOutputAxis",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['index'] = 'index'

scale property ¤

scale: float

size pydantic-field ¤

size: Annotated[
    Union[
        Annotated[int, Gt(0)],
        SizeReference,
        DataDependentSize,
    ],
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as
- a fixed integer
- a reference to another axis with an optional offset (SizeReference)
- a data-dependent size using DataDependentSize (the size is only known after model inference)
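The SizeReference relation is `axis.size = reference.size * reference.scale / axis.scale + offset`, with fractions rounded down; a DataDependentSize only constrains the inferred size to `[min, max]`. A short sketch of both (the helper names are illustrative, not spec API):

```python
def size_from_reference(ref_size, ref_scale, axis_scale, offset=0):
    """axis.size = reference.size * reference.scale / axis.scale + offset (floor)."""
    return ref_size * ref_scale // axis_scale + offset


def data_dependent_size_ok(size, min_size=1, max_size=None):
    """Check a size known only after inference against DataDependentSize bounds."""
    return size >= min_size and (max_size is None or size <= max_size)


# reproduces the SizeReference docstring example:
# w has size 100 and scale 2; h has scale 4 and offset -1 => h = 100*2/4 - 1 = 49
assert size_from_reference(100, 2, 4, offset=-1) == 49
assert data_dependent_size_ok(49)  # default bounds: min=1, no max
```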

type pydantic-field ¤

type: Literal['index'] = 'index'

unit property ¤

unit

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- extra (Optional[Literal["allow", "ignore", "forbid"]]): Currently ignored; a warning is emitted if set. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

InputTensorDescr pydantic-model ¤

Bases: TensorDescrBase[InputAxis]

Show JSON schema:
{
  "$defs": {
    "BatchAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "batch",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "batch",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "const": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
          "title": "Size"
        }
      },
      "required": [
        "type"
      ],
      "title": "model.v0_5.BatchAxis",
      "type": "object"
    },
    "BinarizeAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold values along `axis`",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Threshold",
          "type": "array"
        },
        "axis": {
          "description": "The `threshold` axis",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "threshold",
        "axis"
      ],
      "title": "model.v0_5.BinarizeAlongAxisKwargs",
      "type": "object"
    },
    "BinarizeDescr": {
      "additionalProperties": false,
      "description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: binarize\n        kwargs:\n          axis: 'channel'\n          threshold: [0.25, 0.5, 0.75]\n    ```\n- in Python:\n\n    >>> postprocessing = [BinarizeDescr(\n    ...   kwargs=BinarizeAlongAxisKwargs(\n    ...       axis=AxisId('channel'),\n    ...       threshold=[0.25, 0.5, 0.75],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "binarize",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/BinarizeKwargs"
            },
            {
              "$ref": "#/$defs/BinarizeAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.BinarizeDescr",
      "type": "object"
    },
    "BinarizeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold",
          "title": "Threshold",
          "type": "number"
        }
      },
      "required": [
        "threshold"
      ],
      "title": "model.v0_5.BinarizeKwargs",
      "type": "object"
    },
    "ChannelAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "channel",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "channel",
          "title": "Type",
          "type": "string"
        },
        "channel_names": {
          "items": {
            "minLength": 1,
            "title": "Identifier",
            "type": "string"
          },
          "minItems": 1,
          "title": "Channel Names",
          "type": "array"
        }
      },
      "required": [
        "type",
        "channel_names"
      ],
      "title": "model.v0_5.ChannelAxis",
      "type": "object"
    },
    "ClipDescr": {
      "additionalProperties": false,
      "description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
      "properties": {
        "id": {
          "const": "clip",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ClipKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ClipDescr",
      "type": "object"
    },
    "ClipKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ClipDescr][]",
      "properties": {
        "min": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
          "title": "Min"
        },
        "min_percentile": {
          "anyOf": [
            {
              "exclusiveMaximum": 100,
              "minimum": 0,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
          "title": "Min Percentile"
        },
        "max": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
          "title": "Max"
        },
        "max_percentile": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "maximum": 100,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
          "title": "Max Percentile"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        }
      },
      "title": "model.v0_5.ClipKwargs",
      "type": "object"
    },
    "EnsureDtypeDescr": {
      "additionalProperties": false,
      "description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n    The described bioimage.io model (incl. preprocessing) accepts any\n    float32-compatible tensor, normalizes it with percentiles and clipping and then\n    casts it to uint8, which is what the neural network in this example expects.\n    - in YAML\n        ```yaml\n        inputs:\n        - data:\n            type: float32  # described bioimage.io model is compatible with any float32 input tensor\n          preprocessing:\n          - id: scale_range\n              kwargs:\n              axes: ['y', 'x']\n              max_percentile: 99.8\n              min_percentile: 5.0\n          - id: clip\n              kwargs:\n              min: 0.0\n              max: 1.0\n          - id: ensure_dtype  # the neural network of the model requires uint8\n              kwargs:\n              dtype: uint8\n        ```\n    - in Python:\n        >>> preprocessing = [\n        ...     ScaleRangeDescr(\n        ...         kwargs=ScaleRangeKwargs(\n        ...           axes= (AxisId('y'), AxisId('x')),\n        ...           max_percentile= 99.8,\n        ...           min_percentile= 5.0,\n        ...         )\n        ...     ),\n        ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n        ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n        ... ]",
      "properties": {
        "id": {
          "const": "ensure_dtype",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/EnsureDtypeKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.EnsureDtypeDescr",
      "type": "object"
    },
    "EnsureDtypeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [EnsureDtypeDescr][]",
      "properties": {
        "dtype": {
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "title": "Dtype",
          "type": "string"
        }
      },
      "required": [
        "dtype"
      ],
      "title": "model.v0_5.EnsureDtypeKwargs",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value(s) to normalize with.",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Mean",
          "type": "array"
        },
        "std": {
          "description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
          "items": {
            "minimum": 1e-06,
            "type": "number"
          },
          "minItems": 1,
          "title": "Std",
          "type": "array"
        },
        "axis": {
          "description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
          "examples": [
            "channel",
            "index"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "mean",
        "std",
        "axis"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          mean: 103.5\n          std: 13.7\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n    ... )]\n\n2. independently along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          axis: channel\n          mean: [101.5, 102.5, 103.5]\n          std: [11.7, 12.7, 13.7]\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n    ...     axis=AxisId(\"channel\"),\n    ...     mean=[101.5, 102.5, 103.5],\n    ...     std=[11.7, 12.7, 13.7],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "fixed_zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
            },
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value to normalize with.",
          "title": "Mean",
          "type": "number"
        },
        "std": {
          "description": "The standard deviation value to normalize with.",
          "minimum": 1e-06,
          "title": "Std",
          "type": "number"
        }
      },
      "required": [
        "mean",
        "std"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
      "type": "object"
    },
    "IndexInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.IndexInputAxis",
      "type": "object"
    },
    "IntervalOrRatioDataDescr": {
      "additionalProperties": false,
      "properties": {
        "type": {
          "default": "float32",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64"
          ],
          "examples": [
            "float32",
            "float64",
            "uint8",
            "uint16"
          ],
          "title": "Type",
          "type": "string"
        },
        "range": {
          "default": [
            null,
            null
          ],
          "description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            },
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            }
          ],
          "title": "Range",
          "type": "array"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            }
          ],
          "default": "arbitrary unit",
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "description": "Scale for data on an interval (or ratio) scale.",
          "title": "Scale",
          "type": "number"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Offset for data on a ratio scale.",
          "title": "Offset"
        }
      },
      "title": "model.v0_5.IntervalOrRatioDataDescr",
      "type": "object"
    },
    "NominalOrOrdinalDataDescr": {
      "additionalProperties": false,
      "properties": {
        "values": {
          "anyOf": [
            {
              "items": {
                "type": "integer"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "boolean"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "string"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
          "title": "Values"
        },
        "type": {
          "default": "uint8",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "examples": [
            "float32",
            "uint8",
            "uint16",
            "int64",
            "bool"
          ],
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        }
      },
      "required": [
        "values"
      ],
      "title": "model.v0_5.NominalOrOrdinalDataDescr",
      "type": "object"
    },
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "ScaleLinearAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "axis": {
          "description": "The axis of gain and offset values.",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "gain": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 0.0,
          "description": "additive term",
          "title": "Offset"
        }
      },
      "required": [
        "axis"
      ],
      "title": "model.v0_5.ScaleLinearAlongAxisKwargs",
      "type": "object"
    },
    "ScaleLinearDescr": {
      "additionalProperties": false,
      "description": "Fixed linear scaling.\n\nExamples:\n  1. Scale with scalar gain and offset\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          gain: 2.0\n          offset: 3.0\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n    ... ]\n\n  2. Independent scaling along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          axis: 'channel'\n          gain: [1.0, 2.0, 3.0]\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(\n    ...         kwargs=ScaleLinearAlongAxisKwargs(\n    ...             axis=AxisId(\"channel\"),\n    ...             gain=[1.0, 2.0, 3.0],\n    ...         )\n    ...     )\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_linear",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/ScaleLinearKwargs"
            },
            {
              "$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ScaleLinearDescr",
      "type": "object"
    },
    "ScaleLinearKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "gain": {
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain",
          "type": "number"
        },
        "offset": {
          "default": 0.0,
          "description": "additive term",
          "title": "Offset",
          "type": "number"
        }
      },
      "title": "model.v0_5.ScaleLinearKwargs",
      "type": "object"
    },
    "ScaleRangeDescr": {
      "additionalProperties": false,
      "description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...     ScaleRangeDescr(\n    ...         kwargs=ScaleRangeKwargs(\n    ...           axes= (AxisId('y'), AxisId('x')),\n    ...           max_percentile= 99.8,\n    ...           min_percentile= 5.0,\n    ...         )\n    ...     )\n    ... ]\n\n  2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n       - id: clip\n         kwargs:\n          min: 0.0\n          max: 1.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...   ScaleRangeDescr(\n    ...     kwargs=ScaleRangeKwargs(\n    ...       axes= (AxisId('y'), AxisId('x')),\n    ...       max_percentile= 99.8,\n    ...       min_percentile= 5.0,\n    ...     )\n    ...   ),\n    ...   ClipDescr(\n    ...     kwargs=ClipKwargs(\n    ...       min=0.0,\n    ...       max=1.0,\n    ...     )\n    ...   ),\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_range",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ScaleRangeKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ScaleRangeDescr",
      "type": "object"
    },
    "ScaleRangeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "min_percentile": {
          "default": 0.0,
          "description": "The lower percentile used to determine the value to align with zero.",
          "exclusiveMaximum": 100,
          "minimum": 0,
          "title": "Min Percentile",
          "type": "number"
        },
        "max_percentile": {
          "default": 100.0,
          "description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
          "exclusiveMinimum": 1,
          "maximum": 100,
          "title": "Max Percentile",
          "type": "number"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        },
        "reference_tensor": {
          "anyOf": [
            {
              "maxLength": 32,
              "minLength": 1,
              "title": "TensorId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "ID of the unprocessed input tensor to compute the percentiles from.\nDefault: The tensor itself.",
          "title": "Reference Tensor"
        }
      },
      "title": "model.v0_5.ScaleRangeKwargs",
      "type": "object"
    },
    "SigmoidDescr": {
      "additionalProperties": false,
      "description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: sigmoid\n    ```\n- in Python:\n\n    >>> postprocessing = [SigmoidDescr()]",
      "properties": {
        "id": {
          "const": "sigmoid",
          "title": "Id",
          "type": "string"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SigmoidDescr",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    },
    "SoftmaxDescr": {
      "additionalProperties": false,
      "description": "The softmax function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: softmax\n        kwargs:\n          axis: channel\n    ```\n- in Python:\n\n    >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
      "properties": {
        "id": {
          "const": "softmax",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/SoftmaxKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SoftmaxDescr",
      "type": "object"
    },
    "SoftmaxKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [SoftmaxDescr][]",
      "properties": {
        "axis": {
          "default": "channel",
          "description": "The axis to apply the softmax function along.\nNote:\n    Defaults to 'channel' axis\n    (which may not exist, in which case\n    a different axis id has to be specified).",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "title": "model.v0_5.SoftmaxKwargs",
      "type": "object"
    },
    "SpaceInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceInputAxis",
      "type": "object"
    },
    "TimeInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeInputAxis",
      "type": "object"
    },
    "ZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract mean and divide by variance.\n\nExamples:\n    Subtract tensor mean and variance\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: zero_mean_unit_variance\n    ```\n    - in Python\n    >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
      "properties": {
        "id": {
          "const": "zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "ZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "input",
      "description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
      "maxLength": 32,
      "minLength": 1,
      "title": "TensorId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "free text description",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "axes": {
      "description": "tensor axes",
      "items": {
        "discriminator": {
          "mapping": {
            "batch": "#/$defs/BatchAxis",
            "channel": "#/$defs/ChannelAxis",
            "index": "#/$defs/IndexInputAxis",
            "space": "#/$defs/SpaceInputAxis",
            "time": "#/$defs/TimeInputAxis"
          },
          "propertyName": "type"
        },
        "oneOf": [
          {
            "$ref": "#/$defs/BatchAxis"
          },
          {
            "$ref": "#/$defs/ChannelAxis"
          },
          {
            "$ref": "#/$defs/IndexInputAxis"
          },
          {
            "$ref": "#/$defs/TimeInputAxis"
          },
          {
            "$ref": "#/$defs/SpaceInputAxis"
          }
        ]
      },
      "minItems": 1,
      "title": "Axes",
      "type": "array"
    },
    "test_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has to be an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
    },
    "sample_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A sample tensor to illustrate a possible input/output for the model.\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
    },
    "data": {
      "anyOf": [
        {
          "$ref": "#/$defs/NominalOrOrdinalDataDescr"
        },
        {
          "$ref": "#/$defs/IntervalOrRatioDataDescr"
        },
        {
          "items": {
            "anyOf": [
              {
                "$ref": "#/$defs/NominalOrOrdinalDataDescr"
              },
              {
                "$ref": "#/$defs/IntervalOrRatioDataDescr"
              }
            ]
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "default": {
        "type": "float32",
        "range": [
          null,
          null
        ],
        "unit": "arbitrary unit",
        "scale": 1.0,
        "offset": null
      },
      "description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
      "title": "Data"
    },
    "optional": {
      "default": false,
      "description": "indicates that this tensor may be `None`",
      "title": "Optional",
      "type": "boolean"
    },
    "preprocessing": {
      "description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, one is added\n  to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n  'ensure_dtype' step is added to ensure preprocessing steps do not unintentionally\n  change the data type.",
      "items": {
        "discriminator": {
          "mapping": {
            "binarize": "#/$defs/BinarizeDescr",
            "clip": "#/$defs/ClipDescr",
            "ensure_dtype": "#/$defs/EnsureDtypeDescr",
            "fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
            "scale_linear": "#/$defs/ScaleLinearDescr",
            "scale_range": "#/$defs/ScaleRangeDescr",
            "sigmoid": "#/$defs/SigmoidDescr",
            "softmax": "#/$defs/SoftmaxDescr",
            "zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
          },
          "propertyName": "id"
        },
        "oneOf": [
          {
            "$ref": "#/$defs/BinarizeDescr"
          },
          {
            "$ref": "#/$defs/ClipDescr"
          },
          {
            "$ref": "#/$defs/EnsureDtypeDescr"
          },
          {
            "$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
          },
          {
            "$ref": "#/$defs/ScaleLinearDescr"
          },
          {
            "$ref": "#/$defs/ScaleRangeDescr"
          },
          {
            "$ref": "#/$defs/SigmoidDescr"
          },
          {
            "$ref": "#/$defs/SoftmaxDescr"
          },
          {
            "$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
          }
        ]
      },
      "title": "Preprocessing",
      "type": "array"
    }
  },
  "required": [
    "axes"
  ],
  "title": "model.v0_5.InputTensorDescr",
  "type": "object"
}
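The `zero_mean_unit_variance` step defined in the schema above computes `out = (tensor - mean) / (std + eps)` over the selected axes. A minimal numpy sketch (using positional axis indices instead of the spec's `AxisId` names, which is an illustrative simplification):

```python
import numpy as np

def zero_mean_unit_variance(tensor, axes=None, eps=1e-6):
    """Illustrative sketch of the `zero_mean_unit_variance` preprocessing:
    subtract the mean and divide by the standard deviation, computed jointly
    over the given `axes` (over all axes if `axes` is None)."""
    mean = tensor.mean(axis=axes, keepdims=True)
    std = tensor.std(axis=axes, keepdims=True)
    return (tensor - mean) / (std + eps)

# normalize per channel of a (batch, channel, y, x) tensor
# by reducing over the batch, y and x axes
t = np.random.rand(2, 3, 8, 8).astype("float32")
out = zero_mean_unit_variance(t, axes=(0, 2, 3))
```

Leaving out the batch axis (`axes=(2, 3)` here) normalizes each sample independently, as described for the `axes` kwarg.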

Fields:

Validators:

  • _validate_axes → axes
  • _validate_sample_tensor
  • _check_data_type_across_channels → data
  • _check_data_matches_channelaxis
  • _validate_preprocessing_kwargs

axes pydantic-field ¤

axes: NotEmpty[Sequence[IO_AxisT]]

tensor axes

data pydantic-field ¤

data: Union[
    TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]

Description of the tensor's data values, optionally per channel. If specified per channel, the data type needs to match across channels.

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

free text description

dtype property ¤

dtype: Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
    "bool",
]

dtype as specified under data.type or data[i].type

id pydantic-field ¤

Input tensor id. No duplicates are allowed across all inputs and outputs.

optional pydantic-field ¤

optional: bool = False

indicates that this tensor may be None

preprocessing pydantic-field ¤

preprocessing: List[PreprocessingDescr]

Description of how this input should be preprocessed.

notes:

  • If preprocessing does not start with an 'ensure_dtype' entry, one is added to ensure an input tensor's data type matches the input tensor's data description.
  • If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an 'ensure_dtype' step is added to ensure preprocessing steps do not unintentionally change the data type.
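The implicit `ensure_dtype` bookkeeping described in these notes can be sketched with plain dicts standing in for the descriptor classes (a hypothetical helper, not part of the spec API):

```python
def with_implicit_ensure_dtype(preprocessing, data_dtype):
    """Hypothetical sketch of the documented behavior: bracket a
    preprocessing chain with 'ensure_dtype' steps. Each step is modeled
    as a plain dict with an 'id' key; the real spec uses descriptor classes."""
    steps = list(preprocessing)
    ensure = {"id": "ensure_dtype", "kwargs": {"dtype": data_dtype}}
    # prepend: make the input match the declared data description
    if not steps or steps[0]["id"] != "ensure_dtype":
        steps.insert(0, ensure)
    # append: guard against steps silently changing the dtype
    if steps[-1]["id"] not in ("ensure_dtype", "binarize"):
        steps.append(ensure)
    return steps

chain = with_implicit_ensure_dtype([{"id": "zero_mean_unit_variance"}], "float32")
# chain now starts and ends with an 'ensure_dtype' step
```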

sample_tensor pydantic-field ¤

sample_tensor: FAIR[Optional[FileDescr_]] = None

A sample tensor to illustrate a possible input/output for the model. The sample image primarily serves to inform a human user about an example use case and is typically stored as .hdf5, .png or .tiff. It has to be readable by the imageio library (numpy's .npy format is not supported). The image dimensionality has to match the number of axes specified in this tensor description.

shape property ¤

shape

test_tensor pydantic-field ¤

test_tensor: FAIR[Optional[FileDescr_]] = None

An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
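Writing a conforming test tensor only requires `numpy.save`; the file name and shape below are illustrative:

```python
import os
import tempfile

import numpy as np

# write a test tensor in the required numpy.lib (.npy) format;
# the file name is illustrative
path = os.path.join(tempfile.mkdtemp(), "test_input.npy")
arr = np.zeros((1, 3, 64, 64), dtype="float32")  # e.g. (batch, channel, y, x)
np.save(path, arr)

# loading it back yields an identical ndarray
loaded = np.load(path)
```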

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get_axis_sizes_for_array ¤

get_axis_sizes_for_array(
    array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
def get_axis_sizes_for_array(self, array: NDArray[Any]) -> Dict[AxisId, int]:
    if len(array.shape) != len(self.axes):
        raise ValueError(
            f"Dimension mismatch: array shape {array.shape} (#{len(array.shape)})"
            + f" incompatible with {len(self.axes)} axes."
        )
    return {a.id: array.shape[i] for i, a in enumerate(self.axes)}
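For illustration, the same logic as a standalone function operating on plain axis-id strings (the real method takes the ids from `self.axes`):

```python
import numpy as np

def get_axis_sizes_for_array(axis_ids, array):
    """Standalone sketch of the method above: pair each axis id with the
    matching entry of `array.shape`, rejecting dimension mismatches."""
    if array.ndim != len(axis_ids):
        raise ValueError(
            f"Dimension mismatch: array shape {array.shape} (#{array.ndim})"
            f" incompatible with {len(axis_ids)} axes."
        )
    return dict(zip(axis_ids, array.shape))

sizes = get_axis_sizes_for_array(
    ["batch", "channel", "y", "x"], np.zeros((1, 3, 256, 256))
)
# → {"batch": 1, "channel": 3, "y": 256, "x": 256}
```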

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

IntervalOrRatioDataDescr pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "type": {
      "default": "float32",
      "enum": [
        "float32",
        "float64",
        "uint8",
        "int8",
        "uint16",
        "int16",
        "uint32",
        "int32",
        "uint64",
        "int64"
      ],
      "examples": [
        "float32",
        "float64",
        "uint8",
        "uint16"
      ],
      "title": "Type",
      "type": "string"
    },
    "range": {
      "default": [
        null,
        null
      ],
      "description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
      "maxItems": 2,
      "minItems": 2,
      "prefixItems": [
        {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ]
        },
        {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ]
        }
      ],
      "title": "Range",
      "type": "array"
    },
    "unit": {
      "anyOf": [
        {
          "const": "arbitrary unit",
          "type": "string"
        },
        {
          "description": "An SI unit",
          "minLength": 1,
          "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
          "title": "SiUnit",
          "type": "string"
        }
      ],
      "default": "arbitrary unit",
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "description": "Scale for data on an interval (or ratio) scale.",
      "title": "Scale",
      "type": "number"
    },
    "offset": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Offset for data on a ratio scale.",
      "title": "Offset"
    }
  },
  "title": "model.v0_5.IntervalOrRatioDataDescr",
  "type": "object"
}

Fields:

Validators:

  • _replace_inf

offset pydantic-field ¤

offset: Optional[float] = None

Offset for data on a ratio scale.

range pydantic-field ¤

range: Tuple[Optional[float], Optional[float]] = (
    None,
    None,
)

Tuple (minimum, maximum) specifying the allowed range of the data in this tensor. None corresponds to min/max of what can be expressed by type.
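A consumer resolving `None` bounds to the limits of the declared type could do so with numpy's dtype introspection (a sketch, not part of the spec API):

```python
import numpy as np

def resolve_range(dtype, range_=(None, None)):
    """Sketch (not part of the spec API): replace `None` bounds in a
    `(minimum, maximum)` range with the min/max representable by `dtype`."""
    np_dtype = np.dtype(dtype)
    info = (
        np.iinfo(np_dtype)
        if np.issubdtype(np_dtype, np.integer)
        else np.finfo(np_dtype)
    )
    lo, hi = range_
    return (info.min if lo is None else lo, info.max if hi is None else hi)

resolve_range("uint8")              # → (0, 255)
resolve_range("uint8", (10, None))  # → (10, 255)
```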

scale pydantic-field ¤

scale: float = 1.0

Scale for data on an interval (or ratio) scale.

type pydantic-field ¤

type: Annotated[
    IntervalOrRatioDType,
    Field(
        examples=["float32", "float64", "uint8", "uint16"]
    ),
] = "float32"

unit pydantic-field ¤

unit: Union[Literal["arbitrary unit"], SiUnit] = (
    "arbitrary unit"
)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

KerasHdf5WeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "tensorflow_version": {
      "$ref": "#/$defs/Version",
      "description": "TensorFlow version used to create these weights."
    }
  },
  "required": [
    "source",
    "tensorflow_version"
  ],
  "title": "model.v0_5.KerasHdf5WeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Source of the weights file.

suffix property ¤

suffix: str

tensorflow_version pydantic-field ¤

tensorflow_version: Version

TensorFlow version used to create these weights.

type class-attribute ¤

type: WeightsFormat = 'keras_hdf5'

weights_format_name class-attribute ¤

weights_format_name: str = 'Keras HDF5'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
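For reference, the hash checked here can be computed with the standard library alone (a sketch, not the spec's internal `get_sha256`; the file name and contents are illustrative):

```python
import hashlib
import os
import tempfile

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest expected in the `sha256` field,
    reading the file in chunks (stdlib sketch, not the spec's helper)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# illustrative file; a real entry would point at the actual weights file
path = os.path.join(tempfile.mkdtemp(), "weights.h5")
with open(path, "wb") as f:
    f.write(b"dummy weights")

digest = file_sha256(path)  # 64-character hex string for the `sha256` field
```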

KerasV3WeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the .keras weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "keras_version": {
      "$ref": "#/$defs/Version",
      "description": "Keras version used to create these weights.",
      "ge": 3
    },
    "backend": {
      "description": "Keras backend used to create these weights.",
      "maxItems": 2,
      "minItems": 2,
      "prefixItems": [
        {
          "enum": [
            "tensorflow",
            "jax",
            "torch"
          ],
          "type": "string"
        },
        {
          "$ref": "#/$defs/Version"
        }
      ],
      "title": "Backend",
      "type": "array"
    }
  },
  "required": [
    "source",
    "keras_version",
    "backend"
  ],
  "title": "model.v0_5.KerasV3WeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

backend pydantic-field ¤

backend: Tuple[
    Literal["tensorflow", "jax", "torch"], Version
]

Keras backend used to create these weights.

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

keras_version pydantic-field ¤

keras_version: Annotated[Version, Ge(Version(3))]

Keras version used to create these weights.

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource,
    AfterValidator(wo_special_file_name),
    WithSuffix(".keras", case_sensitive=True),
]

Source of the .keras weights file.

suffix property ¤

suffix: str

type class-attribute ¤

type: WeightsFormat = 'keras_v3'

weights_format_name class-attribute ¤

weights_format_name: str = 'Keras v3'
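Putting the fields above together, a keras_v3 weights entry might look like the following sketch. The dict mirrors what would appear under `weights.keras_v3` in an `rdf.yaml`; all values are illustrative, not taken from a real model.

```python
# Hypothetical keras_v3 weights entry, expressed as a plain Python dict
# (file name, versions, and comment are made up for illustration).
entry = {
    "source": "weights.keras",        # must carry the `.keras` suffix
    "keras_version": "3.3.3",         # must be >= 3
    "backend": ["jax", "0.4.28"],     # (backend name, backend version) pair
    "comment": "exported with model.save()",
}

# The JSON schema marks these three fields as required:
required = {"source", "keras_version", "backend"}
assert required <= entry.keys()
assert entry["source"].endswith(".keras")
```

Actual validation of such an entry goes through `KerasV3WeightsDescr.model_validate` (or the model RDF loaders), which additionally checks the suffix, version bound, and file hash.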

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
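The `context` normalization in the body above can be sketched in isolation: `None` falls back to a default context and a plain mapping is coerced into a context object. `ValidationContext` here is a simplified stdlib stand-in for the real class, not its actual definition.

```python
from collections.abc import Mapping
from dataclasses import dataclass

@dataclass
class ValidationContext:  # simplified stand-in, not bioimageio.spec's class
    perform_io_checks: bool = True
    update_hashes: bool = False

def normalize_context(context):
    if context is None:
        return ValidationContext()            # fall back to a default context
    if isinstance(context, Mapping):
        return ValidationContext(**context)   # coerce a plain mapping
    return context                            # already a context object

ctx = normalize_context({"perform_io_checks": False})
assert isinstance(ctx, ValidationContext) and not ctx.perform_io_checks
```

In the real method the resulting context is then entered as a context manager, so `__init__` and `model_validate` behave identically under it.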

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
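The hash check above can be reduced to a small sketch over in-memory bytes instead of a file reader (function name and signature are illustrative): a missing expected hash, or `update_hashes`, adopts the computed value, while a mismatching expected hash raises.

```python
import hashlib
from typing import Optional

def check_sha256(data: bytes, expected: Optional[str], update_hashes: bool = False) -> str:
    """Sketch of the comparison logic in `validate_sha256`."""
    actual = hashlib.sha256(data).hexdigest()
    if expected is None or update_hashes:
        return actual                      # adopt the freshly computed hash
    if expected != actual:
        raise ValueError(f"Sha256 mismatch: expected {expected}, got {actual}")
    return actual

digest = check_sha256(b"weights", None)
assert len(digest) == 64
assert check_sha256(b"weights", digest) == digest
```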

LicenseId ¤

Bases: ValidatedString


              flowchart TD
              bioimageio.spec.model.v0_5.LicenseId[LicenseId]
              bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]

              bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.LicenseId

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    LicenseIdLiteral
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
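The pattern in `__new__` above, validating in the constructor so every instance is a valid plain string, can be sketched without pydantic. The class below is a stand-in: it checks membership in a fixed set where the real `LicenseId` delegates to a `RootModel` over the SPDX license literals.

```python
class LicenseIdSketch(str):
    """Sketch of the ValidatedString pattern: a str subclass validated on construction."""

    _allowed = frozenset({"MIT", "Apache-2.0", "BSD-3-Clause"})  # illustrative subset

    def __new__(cls, value: object):
        text = str(value)
        if text not in cls._allowed:
            raise ValueError(f"{text!r} is not a known license id")
        return super().__new__(cls, text)

lid = LicenseIdSketch("MIT")
assert lid == "MIT" and isinstance(lid, str)
```

Because the result is a genuine `str`, validated ids can be passed anywhere a string is expected.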

LinkedDataset pydantic-model ¤

Bases: LinkedResourceBase

Reference to a bioimage.io dataset.

Show JSON schema:
{
  "$defs": {
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "description": "Reference to a bioimage.io dataset.",
  "properties": {
    "version": {
      "anyOf": [
        {
          "$ref": "#/$defs/Version"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The version of the linked resource following SemVer 2.0."
    },
    "id": {
      "description": "A valid dataset `id` from the bioimage.io collection.",
      "minLength": 1,
      "title": "DatasetId",
      "type": "string"
    }
  },
  "required": [
    "id"
  ],
  "title": "dataset.v0_3.LinkedDataset",
  "type": "object"
}

Fields:

Validators:

  • _remove_version_number

id pydantic-field ¤

id: DatasetId

A valid dataset id from the bioimage.io collection.

version pydantic-field ¤

version: Optional[Version] = None

The version of the linked resource following SemVer 2.0.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

LinkedModel pydantic-model ¤

Bases: LinkedResourceBase

Reference to a bioimage.io model.

Show JSON schema:
{
  "$defs": {
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "description": "Reference to a bioimage.io model.",
  "properties": {
    "version": {
      "anyOf": [
        {
          "$ref": "#/$defs/Version"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The version of the linked resource following SemVer 2.0."
    },
    "id": {
      "description": "A valid model `id` from the bioimage.io collection.",
      "minLength": 1,
      "title": "ModelId",
      "type": "string"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.LinkedModel",
  "type": "object"
}

Fields:

Validators:

  • _remove_version_number

id pydantic-field ¤

id: ModelId

A valid model id from the bioimage.io collection.

version pydantic-field ¤

version: Optional[Version] = None

The version of the linked resource following SemVer 2.0.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

LinkedResource pydantic-model ¤

Bases: LinkedResourceBase

Reference to a bioimage.io resource

Show JSON schema:
{
  "$defs": {
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "description": "Reference to a bioimage.io resource",
  "properties": {
    "version": {
      "anyOf": [
        {
          "$ref": "#/$defs/Version"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The version of the linked resource following SemVer 2.0."
    },
    "id": {
      "description": "A valid resource `id` from the official bioimage.io collection.",
      "minLength": 1,
      "title": "ResourceId",
      "type": "string"
    }
  },
  "required": [
    "id"
  ],
  "title": "generic.v0_3.LinkedResource",
  "type": "object"
}

Fields:

Validators:

  • _remove_version_number

id pydantic-field ¤

id: ResourceId

A valid resource id from the official bioimage.io collection.

version pydantic-field ¤

version: Optional[Version] = None

The version of the linked resource following SemVer 2.0.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Maintainer pydantic-model ¤

Bases: _Maintainer_v0_2

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "affiliation": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Affiliation",
      "title": "Affiliation"
    },
    "email": {
      "anyOf": [
        {
          "format": "email",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Email",
      "title": "Email"
    },
    "orcid": {
      "anyOf": [
        {
          "description": "An ORCID identifier, see https://orcid.org/",
          "title": "OrcidId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
      "examples": [
        "0000-0001-2345-6789"
      ],
      "title": "Orcid"
    },
    "name": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Name"
    },
    "github_user": {
      "title": "Github User",
      "type": "string"
    }
  },
  "required": [
    "github_user"
  ],
  "title": "generic.v0_3.Maintainer",
  "type": "object"
}

Fields:

Validators:

affiliation pydantic-field ¤

affiliation: Optional[str] = None

Affiliation

email pydantic-field ¤

email: Optional[EmailStr] = None

Email

github_user pydantic-field ¤

github_user: str

name pydantic-field ¤

name: Optional[Annotated[str, Predicate(_has_no_slash)]] = (
    None
)

orcid pydantic-field ¤

orcid: Annotated[
    Optional[OrcidId],
    Field(examples=["0000-0001-2345-6789"]),
] = None

An ORCID iD in hyphenated groups of 4 digits, valid as per ISO 7064 11,2.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_github_user pydantic-validator ¤

validate_github_user(value: str)
Source code in src/bioimageio/spec/generic/v0_3.py
@field_validator("github_user", mode="after")
def validate_github_user(cls, value: str):
    return validate_github_user(value)

ModelDescr pydantic-model ¤

Bases: GenericModelDescrBase

Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. These fields are typically stored in a YAML file which we call a model resource description file (model RDF).

Show JSON schema:
{
  "$defs": {
    "ArchitectureFromFileDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Architecture source file",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        }
      },
      "required": [
        "source",
        "callable"
      ],
      "title": "model.v0_5.ArchitectureFromFileDescr",
      "type": "object"
    },
    "ArchitectureFromLibraryDescr": {
      "additionalProperties": false,
      "properties": {
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        },
        "import_from": {
          "description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
          "title": "Import From",
          "type": "string"
        }
      },
      "required": [
        "callable",
        "import_from"
      ],
      "title": "model.v0_5.ArchitectureFromLibraryDescr",
      "type": "object"
    },
    "AttachmentsDescr": {
      "additionalProperties": true,
      "properties": {
        "files": {
          "description": "File attachments",
          "items": {
            "anyOf": [
              {
                "description": "A URL with the HTTP or HTTPS scheme.",
                "format": "uri",
                "maxLength": 2083,
                "minLength": 1,
                "title": "HttpUrl",
                "type": "string"
              },
              {
                "$ref": "#/$defs/RelativeFilePath"
              },
              {
                "format": "file-path",
                "title": "FilePath",
                "type": "string"
              }
            ]
          },
          "title": "Files",
          "type": "array"
        }
      },
      "title": "generic.v0_2.AttachmentsDescr",
      "type": "object"
    },
    "BadgeDescr": {
      "additionalProperties": false,
      "description": "A custom badge",
      "properties": {
        "label": {
          "description": "badge label to display on hover",
          "examples": [
            "Open in Colab"
          ],
          "title": "Label",
          "type": "string"
        },
        "icon": {
          "anyOf": [
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "badge icon (included in bioimage.io package if not a URL)",
          "examples": [
            "https://colab.research.google.com/assets/colab-badge.svg"
          ],
          "title": "Icon"
        },
        "url": {
          "description": "target URL",
          "examples": [
            "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
          ],
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        }
      },
      "required": [
        "label",
        "url"
      ],
      "title": "generic.v0_2.BadgeDescr",
      "type": "object"
    },
    "BatchAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "batch",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "batch",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "const": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
          "title": "Size"
        }
      },
      "required": [
        "type"
      ],
      "title": "model.v0_5.BatchAxis",
      "type": "object"
    },
    "BiasRisksLimitations": {
      "additionalProperties": true,
      "description": "Known biases, risks, technical limitations, and recommendations for model use.",
      "properties": {
        "known_biases": {
          "default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
          "description": "Biases in training data or model behavior.",
          "title": "Known Biases",
          "type": "string"
        },
        "risks": {
          "default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
          "description": "Potential risks in the context of bioimage analysis.",
          "title": "Risks",
          "type": "string"
        },
        "limitations": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Technical limitations and failure modes.",
          "title": "Limitations"
        },
        "recommendations": {
          "default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
          "description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
          "title": "Recommendations",
          "type": "string"
        }
      },
      "title": "model.v0_5.BiasRisksLimitations",
      "type": "object"
    },
    "BinarizeAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold values along `axis`",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Threshold",
          "type": "array"
        },
        "axis": {
          "description": "The `threshold` axis",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "threshold",
        "axis"
      ],
      "title": "model.v0_5.BinarizeAlongAxisKwargs",
      "type": "object"
    },
    "BinarizeDescr": {
      "additionalProperties": false,
      "description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: binarize\n        kwargs:\n          axis: 'channel'\n          threshold: [0.25, 0.5, 0.75]\n    ```\n- in Python:\n\n    >>> postprocessing = [BinarizeDescr(\n    ...   kwargs=BinarizeAlongAxisKwargs(\n    ...       axis=AxisId('channel'),\n    ...       threshold=[0.25, 0.5, 0.75],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "binarize",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/BinarizeKwargs"
            },
            {
              "$ref": "#/$defs/BinarizeAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.BinarizeDescr",
      "type": "object"
    },
    "BinarizeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold",
          "title": "Threshold",
          "type": "number"
        }
      },
      "required": [
        "threshold"
      ],
      "title": "model.v0_5.BinarizeKwargs",
      "type": "object"
    },
    "ChannelAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "channel",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "channel",
          "title": "Type",
          "type": "string"
        },
        "channel_names": {
          "items": {
            "minLength": 1,
            "title": "Identifier",
            "type": "string"
          },
          "minItems": 1,
          "title": "Channel Names",
          "type": "array"
        }
      },
      "required": [
        "type",
        "channel_names"
      ],
      "title": "model.v0_5.ChannelAxis",
      "type": "object"
    },
    "ClipDescr": {
      "additionalProperties": false,
      "description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
      "properties": {
        "id": {
          "const": "clip",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ClipKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ClipDescr",
      "type": "object"
    },
    "ClipKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ClipDescr][]",
      "properties": {
        "min": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
          "title": "Min"
        },
        "min_percentile": {
          "anyOf": [
            {
              "exclusiveMaximum": 100,
              "minimum": 0,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
          "title": "Min Percentile"
        },
        "max": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
          "title": "Max"
        },
        "max_percentile": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "maximum": 100,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
          "title": "Max Percentile"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        }
      },
      "title": "model.v0_5.ClipKwargs",
      "type": "object"
    },
    "DataDependentSize": {
      "additionalProperties": false,
      "properties": {
        "min": {
          "default": 1,
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "max": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Max"
        }
      },
      "title": "model.v0_5.DataDependentSize",
      "type": "object"
    },
    "Datetime": {
      "description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).",
      "format": "date-time",
      "title": "Datetime",
      "type": "string"
    },
    "EnsureDtypeDescr": {
      "additionalProperties": false,
      "description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n    The described bioimage.io model (incl. preprocessing) accepts any\n    float32-compatible tensor, normalizes it with percentiles and clipping and then\n    casts it to uint8, which is what the neural network in this example expects.\n    - in YAML\n        ```yaml\n        inputs:\n        - data:\n            type: float32  # described bioimage.io model is compatible with any float32 input tensor\n          preprocessing:\n          - id: scale_range\n              kwargs:\n              axes: ['y', 'x']\n              max_percentile: 99.8\n              min_percentile: 5.0\n          - id: clip\n              kwargs:\n              min: 0.0\n              max: 1.0\n          - id: ensure_dtype  # the neural network of the model requires uint8\n              kwargs:\n              dtype: uint8\n        ```\n    - in Python:\n        >>> preprocessing = [\n        ...     ScaleRangeDescr(\n        ...         kwargs=ScaleRangeKwargs(\n        ...           axes= (AxisId('y'), AxisId('x')),\n        ...           max_percentile= 99.8,\n        ...           min_percentile= 5.0,\n        ...         )\n        ...     ),\n        ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n        ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n        ... ]",
      "properties": {
        "id": {
          "const": "ensure_dtype",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/EnsureDtypeKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.EnsureDtypeDescr",
      "type": "object"
    },
    "EnsureDtypeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [EnsureDtypeDescr][]",
      "properties": {
        "dtype": {
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "title": "Dtype",
          "type": "string"
        }
      },
      "required": [
        "dtype"
      ],
      "title": "model.v0_5.EnsureDtypeKwargs",
      "type": "object"
    },
    "EnvironmentalImpact": {
      "additionalProperties": true,
      "description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
      "properties": {
        "hardware_type": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU/CPU specifications",
          "title": "Hardware Type"
        },
        "hours_used": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total compute hours",
          "title": "Hours Used"
        },
        "cloud_provider": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "If applicable",
          "title": "Cloud Provider"
        },
        "compute_region": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Geographic location",
          "title": "Compute Region"
        },
        "co2_emitted": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
          "title": "Co2 Emitted"
        }
      },
      "title": "model.v0_5.EnvironmentalImpact",
      "type": "object"
    },
    "Evaluation": {
      "additionalProperties": true,
      "properties": {
        "model_id": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "ModelId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Model being evaluated.",
          "title": "Model Id"
        },
        "dataset_id": {
          "description": "Dataset used for evaluation.",
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        },
        "dataset_source": {
          "description": "Source of the dataset.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        "dataset_role": {
          "description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
          "enum": [
            "train",
            "validation",
            "test",
            "independent",
            "unknown"
          ],
          "title": "Dataset Role",
          "type": "string"
        },
        "sample_count": {
          "description": "Number of evaluated samples.",
          "title": "Sample Count",
          "type": "integer"
        },
        "evaluation_factors": {
          "description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Evaluation Factors",
          "type": "array"
        },
        "evaluation_factors_long": {
          "description": "Descriptions (long form) of each evaluation factor.",
          "items": {
            "type": "string"
          },
          "title": "Evaluation Factors Long",
          "type": "array"
        },
        "metrics": {
          "description": "(Abbreviations of) metrics used for evaluation.",
          "items": {
            "maxLength": 16,
            "type": "string"
          },
          "title": "Metrics",
          "type": "array"
        },
        "metrics_long": {
          "description": "Description of each metric used.",
          "items": {
            "type": "string"
          },
          "title": "Metrics Long",
          "type": "array"
        },
        "results": {
          "description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
          "items": {
            "items": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "number"
                },
                {
                  "type": "integer"
                }
              ]
            },
            "type": "array"
          },
          "title": "Results",
          "type": "array"
        },
        "results_summary": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Interpretation of results for general audience.\n\nConsider:\n    - Overall model performance\n    - Comparison to existing methods\n    - Limitations and areas for improvement",
          "title": "Results Summary"
        }
      },
      "required": [
        "dataset_id",
        "dataset_source",
        "dataset_role",
        "sample_count",
        "evaluation_factors",
        "evaluation_factors_long",
        "metrics",
        "metrics_long",
        "results"
      ],
      "title": "model.v0_5.Evaluation",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value(s) to normalize with.",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Mean",
          "type": "array"
        },
        "std": {
          "description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
          "items": {
            "minimum": 1e-06,
            "type": "number"
          },
          "minItems": 1,
          "title": "Std",
          "type": "array"
        },
        "axis": {
          "description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
          "examples": [
            "channel",
            "index"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "mean",
        "std",
        "axis"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          mean: 103.5\n          std: 13.7\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n    ... )]\n\n2. independently along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          axis: channel\n          mean: [101.5, 102.5, 103.5]\n          std: [11.7, 12.7, 13.7]\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n    ...     axis=AxisId(\"channel\"),\n    ...     mean=[101.5, 102.5, 103.5],\n    ...     std=[11.7, 12.7, 13.7],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "fixed_zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
            },
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value to normalize with.",
          "title": "Mean",
          "type": "number"
        },
        "std": {
          "description": "The standard deviation value to normalize with.",
          "minimum": 1e-06,
          "title": "Std",
          "type": "number"
        }
      },
      "required": [
        "mean",
        "std"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
      "type": "object"
    },
    "IndexInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.IndexInputAxis",
      "type": "object"
    },
    "IndexOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            },
            {
              "$ref": "#/$defs/DataDependentSize"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        }
      },
      "required": [
        "type",
        "size"
      ],
      "title": "model.v0_5.IndexOutputAxis",
      "type": "object"
    },
    "InputTensorDescr": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "input",
          "description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "free text description",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "axes": {
          "description": "tensor axes",
          "items": {
            "discriminator": {
              "mapping": {
                "batch": "#/$defs/BatchAxis",
                "channel": "#/$defs/ChannelAxis",
                "index": "#/$defs/IndexInputAxis",
                "space": "#/$defs/SpaceInputAxis",
                "time": "#/$defs/TimeInputAxis"
              },
              "propertyName": "type"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BatchAxis"
              },
              {
                "$ref": "#/$defs/ChannelAxis"
              },
              {
                "$ref": "#/$defs/IndexInputAxis"
              },
              {
                "$ref": "#/$defs/TimeInputAxis"
              },
              {
                "$ref": "#/$defs/SpaceInputAxis"
              }
            ]
          },
          "minItems": 1,
          "title": "Axes",
          "type": "array"
        },
        "test_tensor": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
        },
        "sample_tensor": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
        },
        "data": {
          "anyOf": [
            {
              "$ref": "#/$defs/NominalOrOrdinalDataDescr"
            },
            {
              "$ref": "#/$defs/IntervalOrRatioDataDescr"
            },
            {
              "items": {
                "anyOf": [
                  {
                    "$ref": "#/$defs/NominalOrOrdinalDataDescr"
                  },
                  {
                    "$ref": "#/$defs/IntervalOrRatioDataDescr"
                  }
                ]
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": {
            "type": "float32",
            "range": [
              null,
              null
            ],
            "unit": "arbitrary unit",
            "scale": 1.0,
            "offset": null
          },
          "description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
          "title": "Data"
        },
        "optional": {
          "default": false,
          "description": "indicates that this tensor may be `None`",
          "title": "Optional",
          "type": "boolean"
        },
        "preprocessing": {
          "description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, it is added\n  to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n  'ensure_dtype' step is added to ensure preprocessing steps are not unintentionally\n  changing the data type.",
          "items": {
            "discriminator": {
              "mapping": {
                "binarize": "#/$defs/BinarizeDescr",
                "clip": "#/$defs/ClipDescr",
                "ensure_dtype": "#/$defs/EnsureDtypeDescr",
                "fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
                "scale_linear": "#/$defs/ScaleLinearDescr",
                "scale_range": "#/$defs/ScaleRangeDescr",
                "sigmoid": "#/$defs/SigmoidDescr",
                "softmax": "#/$defs/SoftmaxDescr",
                "zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
              },
              "propertyName": "id"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BinarizeDescr"
              },
              {
                "$ref": "#/$defs/ClipDescr"
              },
              {
                "$ref": "#/$defs/EnsureDtypeDescr"
              },
              {
                "$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
              },
              {
                "$ref": "#/$defs/ScaleLinearDescr"
              },
              {
                "$ref": "#/$defs/ScaleRangeDescr"
              },
              {
                "$ref": "#/$defs/SigmoidDescr"
              },
              {
                "$ref": "#/$defs/SoftmaxDescr"
              },
              {
                "$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
              }
            ]
          },
          "title": "Preprocessing",
          "type": "array"
        }
      },
      "required": [
        "axes"
      ],
      "title": "model.v0_5.InputTensorDescr",
      "type": "object"
    },
    "IntervalOrRatioDataDescr": {
      "additionalProperties": false,
      "properties": {
        "type": {
          "default": "float32",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64"
          ],
          "examples": [
            "float32",
            "float64",
            "uint8",
            "uint16"
          ],
          "title": "Type",
          "type": "string"
        },
        "range": {
          "default": [
            null,
            null
          ],
          "description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            },
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            }
          ],
          "title": "Range",
          "type": "array"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            }
          ],
          "default": "arbitrary unit",
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "description": "Scale for data on an interval (or ratio) scale.",
          "title": "Scale",
          "type": "number"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Offset for data on a ratio scale.",
          "title": "Offset"
        }
      },
      "title": "model.v0_5.IntervalOrRatioDataDescr",
      "type": "object"
    },
    "KerasHdf5WeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "TensorFlow version used to create these weights."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.KerasHdf5WeightsDescr",
      "type": "object"
    },
    "KerasV3WeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the .keras weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "keras_version": {
          "$ref": "#/$defs/Version",
          "description": "Keras version used to create these weights.",
          "ge": 3
        },
        "backend": {
          "description": "Keras backend used to create these weights.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "enum": [
                "tensorflow",
                "jax",
                "torch"
              ],
              "type": "string"
            },
            {
              "$ref": "#/$defs/Version"
            }
          ],
          "title": "Backend",
          "type": "array"
        }
      },
      "required": [
        "source",
        "keras_version",
        "backend"
      ],
      "title": "model.v0_5.KerasV3WeightsDescr",
      "type": "object"
    },
    "LinkedDataset": {
      "additionalProperties": false,
      "description": "Reference to a bioimage.io dataset.",
      "properties": {
        "version": {
          "anyOf": [
            {
              "$ref": "#/$defs/Version"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The version of the linked resource following SemVer 2.0."
        },
        "id": {
          "description": "A valid dataset `id` from the bioimage.io collection.",
          "minLength": 1,
          "title": "DatasetId",
          "type": "string"
        }
      },
      "required": [
        "id"
      ],
      "title": "dataset.v0_3.LinkedDataset",
      "type": "object"
    },
    "LinkedModel": {
      "additionalProperties": false,
      "description": "Reference to a bioimage.io model.",
      "properties": {
        "version": {
          "anyOf": [
            {
              "$ref": "#/$defs/Version"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The version of the linked resource following SemVer 2.0."
        },
        "id": {
          "description": "A valid model `id` from the bioimage.io collection.",
          "minLength": 1,
          "title": "ModelId",
          "type": "string"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.LinkedModel",
      "type": "object"
    },
    "NominalOrOrdinalDataDescr": {
      "additionalProperties": false,
      "properties": {
        "values": {
          "anyOf": [
            {
              "items": {
                "type": "integer"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "boolean"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "string"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
          "title": "Values"
        },
        "type": {
          "default": "uint8",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "examples": [
            "float32",
            "uint8",
            "uint16",
            "int64",
            "bool"
          ],
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        }
      },
      "required": [
        "values"
      ],
      "title": "model.v0_5.NominalOrOrdinalDataDescr",
      "type": "object"
    },
    "OnnxWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "opset_version": {
          "description": "ONNX opset version",
          "minimum": 7,
          "title": "Opset Version",
          "type": "integer"
        },
        "external_data": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "weights.onnx.data"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
        }
      },
      "required": [
        "source",
        "opset_version"
      ],
      "title": "model.v0_5.OnnxWeightsDescr",
      "type": "object"
    },
    "OutputTensorDescr": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "output",
          "description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "free text description",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "axes": {
          "description": "tensor axes",
          "items": {
            "discriminator": {
              "mapping": {
                "batch": "#/$defs/BatchAxis",
                "channel": "#/$defs/ChannelAxis",
                "index": "#/$defs/IndexOutputAxis",
                "space": {
                  "oneOf": [
                    {
                      "$ref": "#/$defs/SpaceOutputAxis"
                    },
                    {
                      "$ref": "#/$defs/SpaceOutputAxisWithHalo"
                    }
                  ]
                },
                "time": {
                  "oneOf": [
                    {
                      "$ref": "#/$defs/TimeOutputAxis"
                    },
                    {
                      "$ref": "#/$defs/TimeOutputAxisWithHalo"
                    }
                  ]
                }
              },
              "propertyName": "type"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BatchAxis"
              },
              {
                "$ref": "#/$defs/ChannelAxis"
              },
              {
                "$ref": "#/$defs/IndexOutputAxis"
              },
              {
                "oneOf": [
                  {
                    "$ref": "#/$defs/TimeOutputAxis"
                  },
                  {
                    "$ref": "#/$defs/TimeOutputAxisWithHalo"
                  }
                ]
              },
              {
                "oneOf": [
                  {
                    "$ref": "#/$defs/SpaceOutputAxis"
                  },
                  {
                    "$ref": "#/$defs/SpaceOutputAxisWithHalo"
                  }
                ]
              }
            ]
          },
          "minItems": 1,
          "title": "Axes",
          "type": "array"
        },
        "test_tensor": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
        },
        "sample_tensor": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
        },
        "data": {
          "anyOf": [
            {
              "$ref": "#/$defs/NominalOrOrdinalDataDescr"
            },
            {
              "$ref": "#/$defs/IntervalOrRatioDataDescr"
            },
            {
              "items": {
                "anyOf": [
                  {
                    "$ref": "#/$defs/NominalOrOrdinalDataDescr"
                  },
                  {
                    "$ref": "#/$defs/IntervalOrRatioDataDescr"
                  }
                ]
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": {
            "type": "float32",
            "range": [
              null,
              null
            ],
            "unit": "arbitrary unit",
            "scale": 1.0,
            "offset": null
          },
          "description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
          "title": "Data"
        },
        "postprocessing": {
          "description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n      If not given this is added to cast to this tensor's `data.type`.",
          "items": {
            "discriminator": {
              "mapping": {
                "binarize": "#/$defs/BinarizeDescr",
                "clip": "#/$defs/ClipDescr",
                "ensure_dtype": "#/$defs/EnsureDtypeDescr",
                "fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
                "scale_linear": "#/$defs/ScaleLinearDescr",
                "scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
                "scale_range": "#/$defs/ScaleRangeDescr",
                "sigmoid": "#/$defs/SigmoidDescr",
                "softmax": "#/$defs/SoftmaxDescr",
                "stardist_postprocessing": "#/$defs/StardistPostprocessingDescr",
                "zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
              },
              "propertyName": "id"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BinarizeDescr"
              },
              {
                "$ref": "#/$defs/ClipDescr"
              },
              {
                "$ref": "#/$defs/EnsureDtypeDescr"
              },
              {
                "$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
              },
              {
                "$ref": "#/$defs/ScaleLinearDescr"
              },
              {
                "$ref": "#/$defs/ScaleMeanVarianceDescr"
              },
              {
                "$ref": "#/$defs/ScaleRangeDescr"
              },
              {
                "$ref": "#/$defs/SigmoidDescr"
              },
              {
                "$ref": "#/$defs/SoftmaxDescr"
              },
              {
                "$ref": "#/$defs/StardistPostprocessingDescr"
              },
              {
                "$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
              }
            ]
          },
          "title": "Postprocessing",
          "type": "array"
        }
      },
      "required": [
        "axes"
      ],
      "title": "model.v0_5.OutputTensorDescr",
      "type": "object"
    },
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "PytorchStateDictWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "architecture": {
          "anyOf": [
            {
              "$ref": "#/$defs/ArchitectureFromFileDescr"
            },
            {
              "$ref": "#/$defs/ArchitectureFromLibraryDescr"
            }
          ],
          "title": "Architecture"
        },
        "pytorch_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
        },
        "dependencies": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "environment.yaml"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
        }
      },
      "required": [
        "source",
        "architecture",
        "pytorch_version"
      ],
      "title": "model.v0_5.PytorchStateDictWeightsDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "ReproducibilityTolerance": {
      "additionalProperties": true,
      "description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n    For testing we can request the respective deep learning frameworks to be as\n    reproducible as possible by setting seeds and chosing deterministic algorithms,\n    but differences in operating systems, available hardware and installed drivers\n    may still lead to numerical differences.",
      "properties": {
        "relative_tolerance": {
          "default": 0.001,
          "description": "Maximum relative tolerance of reproduced test tensor.",
          "maximum": 0.01,
          "minimum": 0,
          "title": "Relative Tolerance",
          "type": "number"
        },
        "absolute_tolerance": {
          "default": 0.001,
          "description": "Maximum absolute tolerance of reproduced test tensor.",
          "minimum": 0,
          "title": "Absolute Tolerance",
          "type": "number"
        },
        "mismatched_elements_per_million": {
          "default": 100,
          "description": "Maximum number of mismatched elements/pixels per million to tolerate.",
          "maximum": 1000,
          "minimum": 0,
          "title": "Mismatched Elements Per Million",
          "type": "integer"
        },
        "output_ids": {
          "default": [],
          "description": "Limits the output tensor IDs these reproducibility details apply to.",
          "items": {
            "maxLength": 32,
            "minLength": 1,
            "title": "TensorId",
            "type": "string"
          },
          "title": "Output Ids",
          "type": "array"
        },
        "weights_formats": {
          "default": [],
          "description": "Limits the weights formats these details apply to.",
          "items": {
            "enum": [
              "keras_hdf5",
              "keras_v3",
              "onnx",
              "pytorch_state_dict",
              "tensorflow_js",
              "tensorflow_saved_model_bundle",
              "torchscript"
            ],
            "type": "string"
          },
          "title": "Weights Formats",
          "type": "array"
        }
      },
      "title": "model.v0_5.ReproducibilityTolerance",
      "type": "object"
    },
    "RunMode": {
      "additionalProperties": false,
      "properties": {
        "name": {
          "anyOf": [
            {
              "const": "deepimagej",
              "type": "string"
            },
            {
              "type": "string"
            }
          ],
          "description": "Run mode name",
          "title": "Name"
        },
        "kwargs": {
          "additionalProperties": true,
          "description": "Run mode specific key word arguments",
          "title": "Kwargs",
          "type": "object"
        }
      },
      "required": [
        "name"
      ],
      "title": "model.v0_4.RunMode",
      "type": "object"
    },
    "ScaleLinearAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "axis": {
          "description": "The axis of gain and offset values.",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "gain": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 0.0,
          "description": "additive term",
          "title": "Offset"
        }
      },
      "required": [
        "axis"
      ],
      "title": "model.v0_5.ScaleLinearAlongAxisKwargs",
      "type": "object"
    },
    "ScaleLinearDescr": {
      "additionalProperties": false,
      "description": "Fixed linear scaling.\n\nExamples:\n  1. Scale with scalar gain and offset\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          gain: 2.0\n          offset: 3.0\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n    ... ]\n\n  2. Independent scaling along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          axis: 'channel'\n          gain: [1.0, 2.0, 3.0]\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(\n    ...         kwargs=ScaleLinearAlongAxisKwargs(\n    ...             axis=AxisId(\"channel\"),\n    ...             gain=[1.0, 2.0, 3.0],\n    ...         )\n    ...     )\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_linear",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/ScaleLinearKwargs"
            },
            {
              "$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ScaleLinearDescr",
      "type": "object"
    },
    "ScaleLinearKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "gain": {
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain",
          "type": "number"
        },
        "offset": {
          "default": 0.0,
          "description": "additive term",
          "title": "Offset",
          "type": "number"
        }
      },
      "title": "model.v0_5.ScaleLinearKwargs",
      "type": "object"
    },
    "ScaleMeanVarianceDescr": {
      "additionalProperties": false,
      "description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out  = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
      "properties": {
        "id": {
          "const": "scale_mean_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ScaleMeanVarianceKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ScaleMeanVarianceDescr",
      "type": "object"
    },
    "ScaleMeanVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ScaleMeanVarianceKwargs][]",
      "properties": {
        "reference_tensor": {
          "description": "ID of unprocessed input tensor to match.",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability:\n`out  = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "required": [
        "reference_tensor"
      ],
      "title": "model.v0_5.ScaleMeanVarianceKwargs",
      "type": "object"
    },
    "ScaleRangeDescr": {
      "additionalProperties": false,
      "description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...     ScaleRangeDescr(\n    ...         kwargs=ScaleRangeKwargs(\n    ...           axes= (AxisId('y'), AxisId('x')),\n    ...           max_percentile= 99.8,\n    ...           min_percentile= 5.0,\n    ...         )\n    ...     )\n    ... ]\n\n  2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n       - id: clip\n         kwargs:\n          min: 0.0\n          max: 1.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...   ScaleRangeDescr(\n    ...     kwargs=ScaleRangeKwargs(\n    ...       axes= (AxisId('y'), AxisId('x')),\n    ...       max_percentile= 99.8,\n    ...       min_percentile= 5.0,\n    ...     )\n    ...   ),\n    ...   ClipDescr(\n    ...     kwargs=ClipKwargs(\n    ...       min=0.0,\n    ...       max=1.0,\n    ...     )\n    ...   ),\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_range",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ScaleRangeKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ScaleRangeDescr",
      "type": "object"
    },
    "ScaleRangeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "min_percentile": {
          "default": 0.0,
          "description": "The lower percentile used to determine the value to align with zero.",
          "exclusiveMaximum": 100,
          "minimum": 0,
          "title": "Min Percentile",
          "type": "number"
        },
        "max_percentile": {
          "default": 100.0,
          "description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
          "exclusiveMinimum": 1,
          "maximum": 100,
          "title": "Max Percentile",
          "type": "number"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        },
        "reference_tensor": {
          "anyOf": [
            {
              "maxLength": 32,
              "minLength": 1,
              "title": "TensorId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "ID of the unprocessed input tensor to compute the percentiles from.\nDefault: The tensor itself.",
          "title": "Reference Tensor"
        }
      },
      "title": "model.v0_5.ScaleRangeKwargs",
      "type": "object"
    },
    "SigmoidDescr": {
      "additionalProperties": false,
      "description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: sigmoid\n    ```\n- in Python:\n\n    >>> postprocessing = [SigmoidDescr()]",
      "properties": {
        "id": {
          "const": "sigmoid",
          "title": "Id",
          "type": "string"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SigmoidDescr",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    },
    "SoftmaxDescr": {
      "additionalProperties": false,
      "description": "The softmax function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: softmax\n        kwargs:\n          axis: channel\n    ```\n- in Python:\n\n    >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
      "properties": {
        "id": {
          "const": "softmax",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/SoftmaxKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SoftmaxDescr",
      "type": "object"
    },
    "SoftmaxKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [SoftmaxDescr][]",
      "properties": {
        "axis": {
          "default": "channel",
          "description": "The axis to apply the softmax function along.\nNote:\n    Defaults to 'channel' axis\n    (which may not exist, in which case\n    a different axis id has to be specified).",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "title": "model.v0_5.SoftmaxKwargs",
      "type": "object"
    },
    "SpaceInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceInputAxis",
      "type": "object"
    },
    "SpaceOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxis",
      "type": "object"
    },
    "SpaceOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxisWithHalo",
      "type": "object"
    },
    "StardistPostprocessingDescr": {
      "additionalProperties": false,
      "description": "Stardist postprocessing including non-maximum suppression and converting polygon representations to instance labels\n\nas described in:\n- Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers.\n[*Cell Detection with Star-convex Polygons*](https://arxiv.org/abs/1806.03535).\nInternational Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018.\n- Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, and Gene Myers.\n[*Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy*](http://openaccess.thecvf.com/content_WACV_2020/papers/Weigert_Star-convex_Polyhedra_for_3D_Object_Detection_and_Segmentation_in_Microscopy_WACV_2020_paper.pdf).\nThe IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, March 2020.\n\nNote: Only available if the `stardist` package is installed.",
      "properties": {
        "id": {
          "const": "stardist_postprocessing",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/StardistPostprocessingKwargs2D"
            },
            {
              "$ref": "#/$defs/StardistPostprocessingKwargs3D"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.StardistPostprocessingDescr",
      "type": "object"
    },
    "StardistPostprocessingKwargs2D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs2D",
      "type": "object"
    },
    "StardistPostprocessingKwargs3D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 3,
              "minItems": 3,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        },
        "n_rays": {
          "description": "Number of rays for 3D star-convex polyhedra.",
          "title": "N Rays",
          "type": "integer"
        },
        "anisotropy": {
          "description": "Anisotropy factors for 3D star-convex polyhedra, i.e. the physical pixel size along each spatial axis.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "number"
            },
            {
              "type": "number"
            },
            {
              "type": "number"
            }
          ],
          "title": "Anisotropy",
          "type": "array"
        },
        "overlap_label": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional label to apply to any area of overlapping predicted objects.",
          "title": "Overlap Label"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b",
        "n_rays",
        "anisotropy"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs3D",
      "type": "object"
    },
    "TensorflowJsWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the TensorFlow library used."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.TensorflowJsWeightsDescr",
      "type": "object"
    },
    "TensorflowSavedModelBundleWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the TensorFlow library used."
        },
        "dependencies": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "environment.yaml"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
      "type": "object"
    },
    "TimeInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeInputAxis",
      "type": "object"
    },
    "TimeOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxis",
      "type": "object"
    },
    "TimeOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxisWithHalo",
      "type": "object"
    },
    "TorchscriptWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "pytorch_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the PyTorch library used."
        }
      },
      "required": [
        "source",
        "pytorch_version"
      ],
      "title": "model.v0_5.TorchscriptWeightsDescr",
      "type": "object"
    },
    "TrainingDetails": {
      "additionalProperties": true,
      "properties": {
        "training_preprocessing": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
          "title": "Training Preprocessing"
        },
        "training_epochs": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Number of training epochs.",
          "title": "Training Epochs"
        },
        "training_batch_size": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Batch size used in training.",
          "title": "Training Batch Size"
        },
        "initial_learning_rate": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Initial learning rate used in training.",
          "title": "Initial Learning Rate"
        },
        "learning_rate_schedule": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Learning rate schedule used in training.",
          "title": "Learning Rate Schedule"
        },
        "loss_function": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Loss function used in training, e.g. nn.MSELoss.",
          "title": "Loss Function"
        },
        "loss_function_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `loss_function`",
          "title": "Loss Function Kwargs",
          "type": "object"
        },
        "optimizer": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optimizer used in training, e.g. torch.optim.Adam.",
          "title": "Optimizer"
        },
        "optimizer_kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "Keyword arguments for the `optimizer`",
          "title": "Optimizer Kwargs",
          "type": "object"
        },
        "regularization": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
          "title": "Regularization"
        },
        "training_duration": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total training duration in hours.",
          "title": "Training Duration"
        }
      },
      "title": "model.v0_5.TrainingDetails",
      "type": "object"
    },
    "Uploader": {
      "additionalProperties": false,
      "properties": {
        "email": {
          "description": "Email",
          "format": "email",
          "title": "Email",
          "type": "string"
        },
        "name": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Name",
          "title": "Name"
        }
      },
      "required": [
        "email"
      ],
      "title": "generic.v0_2.Uploader",
      "type": "object"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "Wraps a packaging.version.Version instance for validation in pydantic models.",
      "title": "Version"
    },
    "WeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "keras_hdf5": {
          "anyOf": [
            {
              "$ref": "#/$defs/KerasHdf5WeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "keras_v3": {
          "anyOf": [
            {
              "$ref": "#/$defs/KerasV3WeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "onnx": {
          "anyOf": [
            {
              "$ref": "#/$defs/OnnxWeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "pytorch_state_dict": {
          "anyOf": [
            {
              "$ref": "#/$defs/PytorchStateDictWeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "tensorflow_js": {
          "anyOf": [
            {
              "$ref": "#/$defs/TensorflowJsWeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "tensorflow_saved_model_bundle": {
          "anyOf": [
            {
              "$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        },
        "torchscript": {
          "anyOf": [
            {
              "$ref": "#/$defs/TorchscriptWeightsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null
        }
      },
      "title": "model.v0_5.WeightsDescr",
      "type": "object"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    },
    "ZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract mean and divide by standard deviation.\n\nExamples:\n    Subtract tensor mean and variance\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: zero_mean_unit_variance\n    ```\n    - in Python\n    >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
      "properties": {
        "id": {
          "const": "zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "ZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "Keyword arguments for [ZeroMeanUnitVarianceDescr][]",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
      "type": "object"
    },
    "bioimageio__spec__dataset__v0_2__DatasetDescr": {
      "additionalProperties": false,
      "description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
      "properties": {
        "name": {
          "description": "A human-friendly name of the resource description",
          "minLength": 1,
          "title": "Name",
          "type": "string"
        },
        "description": {
          "title": "Description",
          "type": "string"
        },
        "covers": {
          "description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
          "examples": [
            [
              "cover.png"
            ]
          ],
          "items": {
            "anyOf": [
              {
                "description": "A URL with the HTTP or HTTPS scheme.",
                "format": "uri",
                "maxLength": 2083,
                "minLength": 1,
                "title": "HttpUrl",
                "type": "string"
              },
              {
                "$ref": "#/$defs/RelativeFilePath"
              },
              {
                "format": "file-path",
                "title": "FilePath",
                "type": "string"
              }
            ]
          },
          "title": "Covers",
          "type": "array"
        },
        "id_emoji": {
          "anyOf": [
            {
              "examples": [
                "\ud83e\udd88",
                "\ud83e\udda5"
              ],
              "maxLength": 1,
              "minLength": 1,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "UTF-8 emoji for display alongside the `id`.",
          "title": "Id Emoji"
        },
        "authors": {
          "description": "The authors are the creators of the RDF and the primary points of contact.",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_2__Author"
          },
          "title": "Authors",
          "type": "array"
        },
        "attachments": {
          "anyOf": [
            {
              "$ref": "#/$defs/AttachmentsDescr"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "file and other attachments"
        },
        "cite": {
          "description": "citations",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_2__CiteEntry"
          },
          "title": "Cite",
          "type": "array"
        },
        "config": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n    bioimageio:  # here is the domain name\n        my_custom_key: 3837283\n        another_key:\n            nested: value\n    imagej:       # config specific to ImageJ\n        macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
          "examples": [
            {
              "bioimageio": {
                "another_key": {
                  "nested": "value"
                },
                "my_custom_key": 3837283
              },
              "imagej": {
                "macro_dir": "path/to/macro/file"
              }
            }
          ],
          "title": "Config",
          "type": "object"
        },
        "download_url": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL to download the resource from (deprecated)",
          "title": "Download Url"
        },
        "git_repo": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A URL to the Git repository where the resource is being developed.",
          "examples": [
            "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
          ],
          "title": "Git Repo"
        },
        "icon": {
          "anyOf": [
            {
              "maxLength": 2,
              "minLength": 1,
              "type": "string"
            },
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An icon for illustration",
          "title": "Icon"
        },
        "links": {
          "description": "IDs of other bioimage.io resources",
          "examples": [
            [
              "ilastik/ilastik",
              "deepimagej/deepimagej",
              "zero/notebook_u-net_3d_zerocostdl4mic"
            ]
          ],
          "items": {
            "type": "string"
          },
          "title": "Links",
          "type": "array"
        },
        "uploader": {
          "anyOf": [
            {
              "$ref": "#/$defs/Uploader"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The person who uploaded the resource (e.g. to bioimage.io)"
        },
        "maintainers": {
          "description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_2__Maintainer"
          },
          "title": "Maintainers",
          "type": "array"
        },
        "rdf_source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
          "title": "Rdf Source"
        },
        "tags": {
          "description": "Associated tags",
          "examples": [
            [
              "unet2d",
              "pytorch",
              "nucleus",
              "segmentation",
              "dsb2018"
            ]
          ],
          "items": {
            "type": "string"
          },
          "title": "Tags",
          "type": "array"
        },
        "version": {
          "anyOf": [
            {
              "$ref": "#/$defs/Version"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The version of the resource following SemVer 2.0."
        },
        "version_number": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "version number (n-th published version, not the semantic version)",
          "title": "Version Number"
        },
        "format_version": {
          "const": "0.2.4",
          "description": "The format version of this resource specification\n(not the `version` of the resource description)\nWhen creating a new resource always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
          "title": "Format Version",
          "type": "string"
        },
        "badges": {
          "description": "badges associated with this resource",
          "items": {
            "$ref": "#/$defs/BadgeDescr"
          },
          "title": "Badges",
          "type": "array"
        },
        "documentation": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
          "examples": [
            "https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
            "README.md"
          ],
          "title": "Documentation"
        },
        "license": {
          "anyOf": [
            {
              "enum": [
                "0BSD",
                "3D-Slicer-1.0",
                "AAL",
                "Abstyles",
                "AdaCore-doc",
                "Adobe-2006",
                "Adobe-Display-PostScript",
                "Adobe-Glyph",
                "Adobe-Utopia",
                "ADSL",
                "AFL-1.1",
                "AFL-1.2",
                "AFL-2.0",
                "AFL-2.1",
                "AFL-3.0",
                "Afmparse",
                "AGPL-1.0-only",
                "AGPL-1.0-or-later",
                "AGPL-3.0-only",
                "AGPL-3.0-or-later",
                "Aladdin",
                "AMD-newlib",
                "AMDPLPA",
                "AML",
                "AML-glslang",
                "AMPAS",
                "ANTLR-PD",
                "ANTLR-PD-fallback",
                "any-OSI",
                "any-OSI-perl-modules",
                "Apache-1.0",
                "Apache-1.1",
                "Apache-2.0",
                "APAFML",
                "APL-1.0",
                "App-s2p",
                "APSL-1.0",
                "APSL-1.1",
                "APSL-1.2",
                "APSL-2.0",
                "Arphic-1999",
                "Artistic-1.0",
                "Artistic-1.0-cl8",
                "Artistic-1.0-Perl",
                "Artistic-2.0",
                "Artistic-dist",
                "Aspell-RU",
                "ASWF-Digital-Assets-1.0",
                "ASWF-Digital-Assets-1.1",
                "Baekmuk",
                "Bahyph",
                "Barr",
                "bcrypt-Solar-Designer",
                "Beerware",
                "Bitstream-Charter",
                "Bitstream-Vera",
                "BitTorrent-1.0",
                "BitTorrent-1.1",
                "blessing",
                "BlueOak-1.0.0",
                "Boehm-GC",
                "Boehm-GC-without-fee",
                "Borceux",
                "Brian-Gladman-2-Clause",
                "Brian-Gladman-3-Clause",
                "BSD-1-Clause",
                "BSD-2-Clause",
                "BSD-2-Clause-Darwin",
                "BSD-2-Clause-first-lines",
                "BSD-2-Clause-Patent",
                "BSD-2-Clause-pkgconf-disclaimer",
                "BSD-2-Clause-Views",
                "BSD-3-Clause",
                "BSD-3-Clause-acpica",
                "BSD-3-Clause-Attribution",
                "BSD-3-Clause-Clear",
                "BSD-3-Clause-flex",
                "BSD-3-Clause-HP",
                "BSD-3-Clause-LBNL",
                "BSD-3-Clause-Modification",
                "BSD-3-Clause-No-Military-License",
                "BSD-3-Clause-No-Nuclear-License",
                "BSD-3-Clause-No-Nuclear-License-2014",
                "BSD-3-Clause-No-Nuclear-Warranty",
                "BSD-3-Clause-Open-MPI",
                "BSD-3-Clause-Sun",
                "BSD-4-Clause",
                "BSD-4-Clause-Shortened",
                "BSD-4-Clause-UC",
                "BSD-4.3RENO",
                "BSD-4.3TAHOE",
                "BSD-Advertising-Acknowledgement",
                "BSD-Attribution-HPND-disclaimer",
                "BSD-Inferno-Nettverk",
                "BSD-Protection",
                "BSD-Source-beginning-file",
                "BSD-Source-Code",
                "BSD-Systemics",
                "BSD-Systemics-W3Works",
                "BSL-1.0",
                "BUSL-1.1",
                "bzip2-1.0.6",
                "C-UDA-1.0",
                "CAL-1.0",
                "CAL-1.0-Combined-Work-Exception",
                "Caldera",
                "Caldera-no-preamble",
                "Catharon",
                "CATOSL-1.1",
                "CC-BY-1.0",
                "CC-BY-2.0",
                "CC-BY-2.5",
                "CC-BY-2.5-AU",
                "CC-BY-3.0",
                "CC-BY-3.0-AT",
                "CC-BY-3.0-AU",
                "CC-BY-3.0-DE",
                "CC-BY-3.0-IGO",
                "CC-BY-3.0-NL",
                "CC-BY-3.0-US",
                "CC-BY-4.0",
                "CC-BY-NC-1.0",
                "CC-BY-NC-2.0",
                "CC-BY-NC-2.5",
                "CC-BY-NC-3.0",
                "CC-BY-NC-3.0-DE",
                "CC-BY-NC-4.0",
                "CC-BY-NC-ND-1.0",
                "CC-BY-NC-ND-2.0",
                "CC-BY-NC-ND-2.5",
                "CC-BY-NC-ND-3.0",
                "CC-BY-NC-ND-3.0-DE",
                "CC-BY-NC-ND-3.0-IGO",
                "CC-BY-NC-ND-4.0",
                "CC-BY-NC-SA-1.0",
                "CC-BY-NC-SA-2.0",
                "CC-BY-NC-SA-2.0-DE",
                "CC-BY-NC-SA-2.0-FR",
                "CC-BY-NC-SA-2.0-UK",
                "CC-BY-NC-SA-2.5",
                "CC-BY-NC-SA-3.0",
                "CC-BY-NC-SA-3.0-DE",
                "CC-BY-NC-SA-3.0-IGO",
                "CC-BY-NC-SA-4.0",
                "CC-BY-ND-1.0",
                "CC-BY-ND-2.0",
                "CC-BY-ND-2.5",
                "CC-BY-ND-3.0",
                "CC-BY-ND-3.0-DE",
                "CC-BY-ND-4.0",
                "CC-BY-SA-1.0",
                "CC-BY-SA-2.0",
                "CC-BY-SA-2.0-UK",
                "CC-BY-SA-2.1-JP",
                "CC-BY-SA-2.5",
                "CC-BY-SA-3.0",
                "CC-BY-SA-3.0-AT",
                "CC-BY-SA-3.0-DE",
                "CC-BY-SA-3.0-IGO",
                "CC-BY-SA-4.0",
                "CC-PDDC",
                "CC-PDM-1.0",
                "CC-SA-1.0",
                "CC0-1.0",
                "CDDL-1.0",
                "CDDL-1.1",
                "CDL-1.0",
                "CDLA-Permissive-1.0",
                "CDLA-Permissive-2.0",
                "CDLA-Sharing-1.0",
                "CECILL-1.0",
                "CECILL-1.1",
                "CECILL-2.0",
                "CECILL-2.1",
                "CECILL-B",
                "CECILL-C",
                "CERN-OHL-1.1",
                "CERN-OHL-1.2",
                "CERN-OHL-P-2.0",
                "CERN-OHL-S-2.0",
                "CERN-OHL-W-2.0",
                "CFITSIO",
                "check-cvs",
                "checkmk",
                "ClArtistic",
                "Clips",
                "CMU-Mach",
                "CMU-Mach-nodoc",
                "CNRI-Jython",
                "CNRI-Python",
                "CNRI-Python-GPL-Compatible",
                "COIL-1.0",
                "Community-Spec-1.0",
                "Condor-1.1",
                "copyleft-next-0.3.0",
                "copyleft-next-0.3.1",
                "Cornell-Lossless-JPEG",
                "CPAL-1.0",
                "CPL-1.0",
                "CPOL-1.02",
                "Cronyx",
                "Crossword",
                "CryptoSwift",
                "CrystalStacker",
                "CUA-OPL-1.0",
                "Cube",
                "curl",
                "cve-tou",
                "D-FSL-1.0",
                "DEC-3-Clause",
                "diffmark",
                "DL-DE-BY-2.0",
                "DL-DE-ZERO-2.0",
                "DOC",
                "DocBook-DTD",
                "DocBook-Schema",
                "DocBook-Stylesheet",
                "DocBook-XML",
                "Dotseqn",
                "DRL-1.0",
                "DRL-1.1",
                "DSDP",
                "dtoa",
                "dvipdfm",
                "ECL-1.0",
                "ECL-2.0",
                "EFL-1.0",
                "EFL-2.0",
                "eGenix",
                "Elastic-2.0",
                "Entessa",
                "EPICS",
                "EPL-1.0",
                "EPL-2.0",
                "ErlPL-1.1",
                "etalab-2.0",
                "EUDatagrid",
                "EUPL-1.0",
                "EUPL-1.1",
                "EUPL-1.2",
                "Eurosym",
                "Fair",
                "FBM",
                "FDK-AAC",
                "Ferguson-Twofish",
                "Frameworx-1.0",
                "FreeBSD-DOC",
                "FreeImage",
                "FSFAP",
                "FSFAP-no-warranty-disclaimer",
                "FSFUL",
                "FSFULLR",
                "FSFULLRSD",
                "FSFULLRWD",
                "FSL-1.1-ALv2",
                "FSL-1.1-MIT",
                "FTL",
                "Furuseth",
                "fwlw",
                "Game-Programming-Gems",
                "GCR-docs",
                "GD",
                "generic-xts",
                "GFDL-1.1-invariants-only",
                "GFDL-1.1-invariants-or-later",
                "GFDL-1.1-no-invariants-only",
                "GFDL-1.1-no-invariants-or-later",
                "GFDL-1.1-only",
                "GFDL-1.1-or-later",
                "GFDL-1.2-invariants-only",
                "GFDL-1.2-invariants-or-later",
                "GFDL-1.2-no-invariants-only",
                "GFDL-1.2-no-invariants-or-later",
                "GFDL-1.2-only",
                "GFDL-1.2-or-later",
                "GFDL-1.3-invariants-only",
                "GFDL-1.3-invariants-or-later",
                "GFDL-1.3-no-invariants-only",
                "GFDL-1.3-no-invariants-or-later",
                "GFDL-1.3-only",
                "GFDL-1.3-or-later",
                "Giftware",
                "GL2PS",
                "Glide",
                "Glulxe",
                "GLWTPL",
                "gnuplot",
                "GPL-1.0-only",
                "GPL-1.0-or-later",
                "GPL-2.0-only",
                "GPL-2.0-or-later",
                "GPL-3.0-only",
                "GPL-3.0-or-later",
                "Graphics-Gems",
                "gSOAP-1.3b",
                "gtkbook",
                "Gutmann",
                "HaskellReport",
                "HDF5",
                "hdparm",
                "HIDAPI",
                "Hippocratic-2.1",
                "HP-1986",
                "HP-1989",
                "HPND",
                "HPND-DEC",
                "HPND-doc",
                "HPND-doc-sell",
                "HPND-export-US",
                "HPND-export-US-acknowledgement",
                "HPND-export-US-modify",
                "HPND-export2-US",
                "HPND-Fenneberg-Livingston",
                "HPND-INRIA-IMAG",
                "HPND-Intel",
                "HPND-Kevlin-Henney",
                "HPND-Markus-Kuhn",
                "HPND-merchantability-variant",
                "HPND-MIT-disclaimer",
                "HPND-Netrek",
                "HPND-Pbmplus",
                "HPND-sell-MIT-disclaimer-xserver",
                "HPND-sell-regexpr",
                "HPND-sell-variant",
                "HPND-sell-variant-MIT-disclaimer",
                "HPND-sell-variant-MIT-disclaimer-rev",
                "HPND-UC",
                "HPND-UC-export-US",
                "HTMLTIDY",
                "IBM-pibs",
                "ICU",
                "IEC-Code-Components-EULA",
                "IJG",
                "IJG-short",
                "ImageMagick",
                "iMatix",
                "Imlib2",
                "Info-ZIP",
                "Inner-Net-2.0",
                "InnoSetup",
                "Intel",
                "Intel-ACPI",
                "Interbase-1.0",
                "IPA",
                "IPL-1.0",
                "ISC",
                "ISC-Veillard",
                "Jam",
                "JasPer-2.0",
                "jove",
                "JPL-image",
                "JPNIC",
                "JSON",
                "Kastrup",
                "Kazlib",
                "Knuth-CTAN",
                "LAL-1.2",
                "LAL-1.3",
                "Latex2e",
                "Latex2e-translated-notice",
                "Leptonica",
                "LGPL-2.0-only",
                "LGPL-2.0-or-later",
                "LGPL-2.1-only",
                "LGPL-2.1-or-later",
                "LGPL-3.0-only",
                "LGPL-3.0-or-later",
                "LGPLLR",
                "Libpng",
                "libpng-1.6.35",
                "libpng-2.0",
                "libselinux-1.0",
                "libtiff",
                "libutil-David-Nugent",
                "LiLiQ-P-1.1",
                "LiLiQ-R-1.1",
                "LiLiQ-Rplus-1.1",
                "Linux-man-pages-1-para",
                "Linux-man-pages-copyleft",
                "Linux-man-pages-copyleft-2-para",
                "Linux-man-pages-copyleft-var",
                "Linux-OpenIB",
                "LOOP",
                "LPD-document",
                "LPL-1.0",
                "LPL-1.02",
                "LPPL-1.0",
                "LPPL-1.1",
                "LPPL-1.2",
                "LPPL-1.3a",
                "LPPL-1.3c",
                "lsof",
                "Lucida-Bitmap-Fonts",
                "LZMA-SDK-9.11-to-9.20",
                "LZMA-SDK-9.22",
                "Mackerras-3-Clause",
                "Mackerras-3-Clause-acknowledgment",
                "magaz",
                "mailprio",
                "MakeIndex",
                "man2html",
                "Martin-Birgmeier",
                "McPhee-slideshow",
                "metamail",
                "Minpack",
                "MIPS",
                "MirOS",
                "MIT",
                "MIT-0",
                "MIT-advertising",
                "MIT-Click",
                "MIT-CMU",
                "MIT-enna",
                "MIT-feh",
                "MIT-Festival",
                "MIT-Khronos-old",
                "MIT-Modern-Variant",
                "MIT-open-group",
                "MIT-testregex",
                "MIT-Wu",
                "MITNFA",
                "MMIXware",
                "Motosoto",
                "MPEG-SSG",
                "mpi-permissive",
                "mpich2",
                "MPL-1.0",
                "MPL-1.1",
                "MPL-2.0",
                "MPL-2.0-no-copyleft-exception",
                "mplus",
                "MS-LPL",
                "MS-PL",
                "MS-RL",
                "MTLL",
                "MulanPSL-1.0",
                "MulanPSL-2.0",
                "Multics",
                "Mup",
                "NAIST-2003",
                "NASA-1.3",
                "Naumen",
                "NBPL-1.0",
                "NCBI-PD",
                "NCGL-UK-2.0",
                "NCL",
                "NCSA",
                "NetCDF",
                "Newsletr",
                "NGPL",
                "ngrep",
                "NICTA-1.0",
                "NIST-PD",
                "NIST-PD-fallback",
                "NIST-Software",
                "NLOD-1.0",
                "NLOD-2.0",
                "NLPL",
                "Nokia",
                "NOSL",
                "Noweb",
                "NPL-1.0",
                "NPL-1.1",
                "NPOSL-3.0",
                "NRL",
                "NTIA-PD",
                "NTP",
                "NTP-0",
                "O-UDA-1.0",
                "OAR",
                "OCCT-PL",
                "OCLC-2.0",
                "ODbL-1.0",
                "ODC-By-1.0",
                "OFFIS",
                "OFL-1.0",
                "OFL-1.0-no-RFN",
                "OFL-1.0-RFN",
                "OFL-1.1",
                "OFL-1.1-no-RFN",
                "OFL-1.1-RFN",
                "OGC-1.0",
                "OGDL-Taiwan-1.0",
                "OGL-Canada-2.0",
                "OGL-UK-1.0",
                "OGL-UK-2.0",
                "OGL-UK-3.0",
                "OGTSL",
                "OLDAP-1.1",
                "OLDAP-1.2",
                "OLDAP-1.3",
                "OLDAP-1.4",
                "OLDAP-2.0",
                "OLDAP-2.0.1",
                "OLDAP-2.1",
                "OLDAP-2.2",
                "OLDAP-2.2.1",
                "OLDAP-2.2.2",
                "OLDAP-2.3",
                "OLDAP-2.4",
                "OLDAP-2.5",
                "OLDAP-2.6",
                "OLDAP-2.7",
                "OLDAP-2.8",
                "OLFL-1.3",
                "OML",
                "OpenPBS-2.3",
                "OpenSSL",
                "OpenSSL-standalone",
                "OpenVision",
                "OPL-1.0",
                "OPL-UK-3.0",
                "OPUBL-1.0",
                "OSET-PL-2.1",
                "OSL-1.0",
                "OSL-1.1",
                "OSL-2.0",
                "OSL-2.1",
                "OSL-3.0",
                "PADL",
                "Parity-6.0.0",
                "Parity-7.0.0",
                "PDDL-1.0",
                "PHP-3.0",
                "PHP-3.01",
                "Pixar",
                "pkgconf",
                "Plexus",
                "pnmstitch",
                "PolyForm-Noncommercial-1.0.0",
                "PolyForm-Small-Business-1.0.0",
                "PostgreSQL",
                "PPL",
                "PSF-2.0",
                "psfrag",
                "psutils",
                "Python-2.0",
                "Python-2.0.1",
                "python-ldap",
                "Qhull",
                "QPL-1.0",
                "QPL-1.0-INRIA-2004",
                "radvd",
                "Rdisc",
                "RHeCos-1.1",
                "RPL-1.1",
                "RPL-1.5",
                "RPSL-1.0",
                "RSA-MD",
                "RSCPL",
                "Ruby",
                "Ruby-pty",
                "SAX-PD",
                "SAX-PD-2.0",
                "Saxpath",
                "SCEA",
                "SchemeReport",
                "Sendmail",
                "Sendmail-8.23",
                "Sendmail-Open-Source-1.1",
                "SGI-B-1.0",
                "SGI-B-1.1",
                "SGI-B-2.0",
                "SGI-OpenGL",
                "SGP4",
                "SHL-0.5",
                "SHL-0.51",
                "SimPL-2.0",
                "SISSL",
                "SISSL-1.2",
                "SL",
                "Sleepycat",
                "SMAIL-GPL",
                "SMLNJ",
                "SMPPL",
                "SNIA",
                "snprintf",
                "SOFA",
                "softSurfer",
                "Soundex",
                "Spencer-86",
                "Spencer-94",
                "Spencer-99",
                "SPL-1.0",
                "ssh-keyscan",
                "SSH-OpenSSH",
                "SSH-short",
                "SSLeay-standalone",
                "SSPL-1.0",
                "SugarCRM-1.1.3",
                "SUL-1.0",
                "Sun-PPP",
                "Sun-PPP-2000",
                "SunPro",
                "SWL",
                "swrule",
                "Symlinks",
                "TAPR-OHL-1.0",
                "TCL",
                "TCP-wrappers",
                "TermReadKey",
                "TGPPL-1.0",
                "ThirdEye",
                "threeparttable",
                "TMate",
                "TORQUE-1.1",
                "TOSL",
                "TPDL",
                "TPL-1.0",
                "TrustedQSL",
                "TTWL",
                "TTYP0",
                "TU-Berlin-1.0",
                "TU-Berlin-2.0",
                "Ubuntu-font-1.0",
                "UCAR",
                "UCL-1.0",
                "ulem",
                "UMich-Merit",
                "Unicode-3.0",
                "Unicode-DFS-2015",
                "Unicode-DFS-2016",
                "Unicode-TOU",
                "UnixCrypt",
                "Unlicense",
                "Unlicense-libtelnet",
                "Unlicense-libwhirlpool",
                "UPL-1.0",
                "URT-RLE",
                "Vim",
                "VOSTROM",
                "VSL-1.0",
                "W3C",
                "W3C-19980720",
                "W3C-20150513",
                "w3m",
                "Watcom-1.0",
                "Widget-Workshop",
                "Wsuipa",
                "WTFPL",
                "wwl",
                "X11",
                "X11-distribute-modifications-variant",
                "X11-swapped",
                "Xdebug-1.03",
                "Xerox",
                "Xfig",
                "XFree86-1.1",
                "xinetd",
                "xkeyboard-config-Zinoviev",
                "xlock",
                "Xnet",
                "xpp",
                "XSkat",
                "xzoom",
                "YPL-1.0",
                "YPL-1.1",
                "Zed",
                "Zeeff",
                "Zend-2.0",
                "Zimbra-1.3",
                "Zimbra-1.4",
                "Zlib",
                "zlib-acknowledgement",
                "ZPL-1.1",
                "ZPL-2.0",
                "ZPL-2.1"
              ],
              "title": "LicenseId",
              "type": "string"
            },
            {
              "enum": [
                "AGPL-1.0",
                "AGPL-3.0",
                "BSD-2-Clause-FreeBSD",
                "BSD-2-Clause-NetBSD",
                "bzip2-1.0.5",
                "eCos-2.0",
                "GFDL-1.1",
                "GFDL-1.2",
                "GFDL-1.3",
                "GPL-1.0",
                "GPL-1.0+",
                "GPL-2.0",
                "GPL-2.0+",
                "GPL-2.0-with-autoconf-exception",
                "GPL-2.0-with-bison-exception",
                "GPL-2.0-with-classpath-exception",
                "GPL-2.0-with-font-exception",
                "GPL-2.0-with-GCC-exception",
                "GPL-3.0",
                "GPL-3.0+",
                "GPL-3.0-with-autoconf-exception",
                "GPL-3.0-with-GCC-exception",
                "LGPL-2.0",
                "LGPL-2.0+",
                "LGPL-2.1",
                "LGPL-2.1+",
                "LGPL-3.0",
                "LGPL-3.0+",
                "Net-SNMP",
                "Nunit",
                "StandardML-NJ",
                "wxWindows"
              ],
              "title": "DeprecatedLicenseId",
              "type": "string"
            },
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom licenses beyond the SPDX license list; if you need one, please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
          "examples": [
            "CC0-1.0",
            "MIT",
            "BSD-2-Clause"
          ],
          "title": "License"
        },
        "type": {
          "const": "dataset",
          "title": "Type",
          "type": "string"
        },
        "id": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "DatasetId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
          "title": "Id"
        },
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL to the source of the dataset.",
          "title": "Source"
        }
      },
      "required": [
        "name",
        "description",
        "format_version",
        "type"
      ],
      "title": "dataset 0.2.4",
      "type": "object"
    },
    "bioimageio__spec__dataset__v0_3__DatasetDescr": {
      "additionalProperties": false,
      "description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
      "properties": {
        "name": {
          "description": "A human-friendly name of the resource description.\nMay only contain letters, digits, underscore, minus, parentheses and spaces.",
          "maxLength": 128,
          "minLength": 5,
          "title": "Name",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A string containing a brief description.",
          "maxLength": 1024,
          "title": "Description",
          "type": "string"
        },
        "covers": {
          "description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
          "examples": [
            [
              "cover.png"
            ]
          ],
          "items": {
            "anyOf": [
              {
                "description": "A URL with the HTTP or HTTPS scheme.",
                "format": "uri",
                "maxLength": 2083,
                "minLength": 1,
                "title": "HttpUrl",
                "type": "string"
              },
              {
                "$ref": "#/$defs/RelativeFilePath"
              },
              {
                "format": "file-path",
                "title": "FilePath",
                "type": "string"
              }
            ]
          },
          "title": "Covers",
          "type": "array"
        },
        "id_emoji": {
          "anyOf": [
            {
              "examples": [
                "\ud83e\udd88",
                "\ud83e\udda5"
              ],
              "maxLength": 2,
              "minLength": 1,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "UTF-8 emoji for display alongside the `id`.",
          "title": "Id Emoji"
        },
        "authors": {
          "description": "The authors are the creators of this resource description and the primary points of contact.",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
          },
          "title": "Authors",
          "type": "array"
        },
        "attachments": {
          "description": "file attachments",
          "items": {
            "$ref": "#/$defs/FileDescr"
          },
          "title": "Attachments",
          "type": "array"
        },
        "cite": {
          "description": "citations",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
          },
          "title": "Cite",
          "type": "array"
        },
        "license": {
          "anyOf": [
            {
              "enum": [
                "0BSD",
                "3D-Slicer-1.0",
                "AAL",
                "Abstyles",
                "AdaCore-doc",
                "Adobe-2006",
                "Adobe-Display-PostScript",
                "Adobe-Glyph",
                "Adobe-Utopia",
                "ADSL",
                "AFL-1.1",
                "AFL-1.2",
                "AFL-2.0",
                "AFL-2.1",
                "AFL-3.0",
                "Afmparse",
                "AGPL-1.0-only",
                "AGPL-1.0-or-later",
                "AGPL-3.0-only",
                "AGPL-3.0-or-later",
                "Aladdin",
                "AMD-newlib",
                "AMDPLPA",
                "AML",
                "AML-glslang",
                "AMPAS",
                "ANTLR-PD",
                "ANTLR-PD-fallback",
                "any-OSI",
                "any-OSI-perl-modules",
                "Apache-1.0",
                "Apache-1.1",
                "Apache-2.0",
                "APAFML",
                "APL-1.0",
                "App-s2p",
                "APSL-1.0",
                "APSL-1.1",
                "APSL-1.2",
                "APSL-2.0",
                "Arphic-1999",
                "Artistic-1.0",
                "Artistic-1.0-cl8",
                "Artistic-1.0-Perl",
                "Artistic-2.0",
                "Artistic-dist",
                "Aspell-RU",
                "ASWF-Digital-Assets-1.0",
                "ASWF-Digital-Assets-1.1",
                "Baekmuk",
                "Bahyph",
                "Barr",
                "bcrypt-Solar-Designer",
                "Beerware",
                "Bitstream-Charter",
                "Bitstream-Vera",
                "BitTorrent-1.0",
                "BitTorrent-1.1",
                "blessing",
                "BlueOak-1.0.0",
                "Boehm-GC",
                "Boehm-GC-without-fee",
                "Borceux",
                "Brian-Gladman-2-Clause",
                "Brian-Gladman-3-Clause",
                "BSD-1-Clause",
                "BSD-2-Clause",
                "BSD-2-Clause-Darwin",
                "BSD-2-Clause-first-lines",
                "BSD-2-Clause-Patent",
                "BSD-2-Clause-pkgconf-disclaimer",
                "BSD-2-Clause-Views",
                "BSD-3-Clause",
                "BSD-3-Clause-acpica",
                "BSD-3-Clause-Attribution",
                "BSD-3-Clause-Clear",
                "BSD-3-Clause-flex",
                "BSD-3-Clause-HP",
                "BSD-3-Clause-LBNL",
                "BSD-3-Clause-Modification",
                "BSD-3-Clause-No-Military-License",
                "BSD-3-Clause-No-Nuclear-License",
                "BSD-3-Clause-No-Nuclear-License-2014",
                "BSD-3-Clause-No-Nuclear-Warranty",
                "BSD-3-Clause-Open-MPI",
                "BSD-3-Clause-Sun",
                "BSD-4-Clause",
                "BSD-4-Clause-Shortened",
                "BSD-4-Clause-UC",
                "BSD-4.3RENO",
                "BSD-4.3TAHOE",
                "BSD-Advertising-Acknowledgement",
                "BSD-Attribution-HPND-disclaimer",
                "BSD-Inferno-Nettverk",
                "BSD-Protection",
                "BSD-Source-beginning-file",
                "BSD-Source-Code",
                "BSD-Systemics",
                "BSD-Systemics-W3Works",
                "BSL-1.0",
                "BUSL-1.1",
                "bzip2-1.0.6",
                "C-UDA-1.0",
                "CAL-1.0",
                "CAL-1.0-Combined-Work-Exception",
                "Caldera",
                "Caldera-no-preamble",
                "Catharon",
                "CATOSL-1.1",
                "CC-BY-1.0",
                "CC-BY-2.0",
                "CC-BY-2.5",
                "CC-BY-2.5-AU",
                "CC-BY-3.0",
                "CC-BY-3.0-AT",
                "CC-BY-3.0-AU",
                "CC-BY-3.0-DE",
                "CC-BY-3.0-IGO",
                "CC-BY-3.0-NL",
                "CC-BY-3.0-US",
                "CC-BY-4.0",
                "CC-BY-NC-1.0",
                "CC-BY-NC-2.0",
                "CC-BY-NC-2.5",
                "CC-BY-NC-3.0",
                "CC-BY-NC-3.0-DE",
                "CC-BY-NC-4.0",
                "CC-BY-NC-ND-1.0",
                "CC-BY-NC-ND-2.0",
                "CC-BY-NC-ND-2.5",
                "CC-BY-NC-ND-3.0",
                "CC-BY-NC-ND-3.0-DE",
                "CC-BY-NC-ND-3.0-IGO",
                "CC-BY-NC-ND-4.0",
                "CC-BY-NC-SA-1.0",
                "CC-BY-NC-SA-2.0",
                "CC-BY-NC-SA-2.0-DE",
                "CC-BY-NC-SA-2.0-FR",
                "CC-BY-NC-SA-2.0-UK",
                "CC-BY-NC-SA-2.5",
                "CC-BY-NC-SA-3.0",
                "CC-BY-NC-SA-3.0-DE",
                "CC-BY-NC-SA-3.0-IGO",
                "CC-BY-NC-SA-4.0",
                "CC-BY-ND-1.0",
                "CC-BY-ND-2.0",
                "CC-BY-ND-2.5",
                "CC-BY-ND-3.0",
                "CC-BY-ND-3.0-DE",
                "CC-BY-ND-4.0",
                "CC-BY-SA-1.0",
                "CC-BY-SA-2.0",
                "CC-BY-SA-2.0-UK",
                "CC-BY-SA-2.1-JP",
                "CC-BY-SA-2.5",
                "CC-BY-SA-3.0",
                "CC-BY-SA-3.0-AT",
                "CC-BY-SA-3.0-DE",
                "CC-BY-SA-3.0-IGO",
                "CC-BY-SA-4.0",
                "CC-PDDC",
                "CC-PDM-1.0",
                "CC-SA-1.0",
                "CC0-1.0",
                "CDDL-1.0",
                "CDDL-1.1",
                "CDL-1.0",
                "CDLA-Permissive-1.0",
                "CDLA-Permissive-2.0",
                "CDLA-Sharing-1.0",
                "CECILL-1.0",
                "CECILL-1.1",
                "CECILL-2.0",
                "CECILL-2.1",
                "CECILL-B",
                "CECILL-C",
                "CERN-OHL-1.1",
                "CERN-OHL-1.2",
                "CERN-OHL-P-2.0",
                "CERN-OHL-S-2.0",
                "CERN-OHL-W-2.0",
                "CFITSIO",
                "check-cvs",
                "checkmk",
                "ClArtistic",
                "Clips",
                "CMU-Mach",
                "CMU-Mach-nodoc",
                "CNRI-Jython",
                "CNRI-Python",
                "CNRI-Python-GPL-Compatible",
                "COIL-1.0",
                "Community-Spec-1.0",
                "Condor-1.1",
                "copyleft-next-0.3.0",
                "copyleft-next-0.3.1",
                "Cornell-Lossless-JPEG",
                "CPAL-1.0",
                "CPL-1.0",
                "CPOL-1.02",
                "Cronyx",
                "Crossword",
                "CryptoSwift",
                "CrystalStacker",
                "CUA-OPL-1.0",
                "Cube",
                "curl",
                "cve-tou",
                "D-FSL-1.0",
                "DEC-3-Clause",
                "diffmark",
                "DL-DE-BY-2.0",
                "DL-DE-ZERO-2.0",
                "DOC",
                "DocBook-DTD",
                "DocBook-Schema",
                "DocBook-Stylesheet",
                "DocBook-XML",
                "Dotseqn",
                "DRL-1.0",
                "DRL-1.1",
                "DSDP",
                "dtoa",
                "dvipdfm",
                "ECL-1.0",
                "ECL-2.0",
                "EFL-1.0",
                "EFL-2.0",
                "eGenix",
                "Elastic-2.0",
                "Entessa",
                "EPICS",
                "EPL-1.0",
                "EPL-2.0",
                "ErlPL-1.1",
                "etalab-2.0",
                "EUDatagrid",
                "EUPL-1.0",
                "EUPL-1.1",
                "EUPL-1.2",
                "Eurosym",
                "Fair",
                "FBM",
                "FDK-AAC",
                "Ferguson-Twofish",
                "Frameworx-1.0",
                "FreeBSD-DOC",
                "FreeImage",
                "FSFAP",
                "FSFAP-no-warranty-disclaimer",
                "FSFUL",
                "FSFULLR",
                "FSFULLRSD",
                "FSFULLRWD",
                "FSL-1.1-ALv2",
                "FSL-1.1-MIT",
                "FTL",
                "Furuseth",
                "fwlw",
                "Game-Programming-Gems",
                "GCR-docs",
                "GD",
                "generic-xts",
                "GFDL-1.1-invariants-only",
                "GFDL-1.1-invariants-or-later",
                "GFDL-1.1-no-invariants-only",
                "GFDL-1.1-no-invariants-or-later",
                "GFDL-1.1-only",
                "GFDL-1.1-or-later",
                "GFDL-1.2-invariants-only",
                "GFDL-1.2-invariants-or-later",
                "GFDL-1.2-no-invariants-only",
                "GFDL-1.2-no-invariants-or-later",
                "GFDL-1.2-only",
                "GFDL-1.2-or-later",
                "GFDL-1.3-invariants-only",
                "GFDL-1.3-invariants-or-later",
                "GFDL-1.3-no-invariants-only",
                "GFDL-1.3-no-invariants-or-later",
                "GFDL-1.3-only",
                "GFDL-1.3-or-later",
                "Giftware",
                "GL2PS",
                "Glide",
                "Glulxe",
                "GLWTPL",
                "gnuplot",
                "GPL-1.0-only",
                "GPL-1.0-or-later",
                "GPL-2.0-only",
                "GPL-2.0-or-later",
                "GPL-3.0-only",
                "GPL-3.0-or-later",
                "Graphics-Gems",
                "gSOAP-1.3b",
                "gtkbook",
                "Gutmann",
                "HaskellReport",
                "HDF5",
                "hdparm",
                "HIDAPI",
                "Hippocratic-2.1",
                "HP-1986",
                "HP-1989",
                "HPND",
                "HPND-DEC",
                "HPND-doc",
                "HPND-doc-sell",
                "HPND-export-US",
                "HPND-export-US-acknowledgement",
                "HPND-export-US-modify",
                "HPND-export2-US",
                "HPND-Fenneberg-Livingston",
                "HPND-INRIA-IMAG",
                "HPND-Intel",
                "HPND-Kevlin-Henney",
                "HPND-Markus-Kuhn",
                "HPND-merchantability-variant",
                "HPND-MIT-disclaimer",
                "HPND-Netrek",
                "HPND-Pbmplus",
                "HPND-sell-MIT-disclaimer-xserver",
                "HPND-sell-regexpr",
                "HPND-sell-variant",
                "HPND-sell-variant-MIT-disclaimer",
                "HPND-sell-variant-MIT-disclaimer-rev",
                "HPND-UC",
                "HPND-UC-export-US",
                "HTMLTIDY",
                "IBM-pibs",
                "ICU",
                "IEC-Code-Components-EULA",
                "IJG",
                "IJG-short",
                "ImageMagick",
                "iMatix",
                "Imlib2",
                "Info-ZIP",
                "Inner-Net-2.0",
                "InnoSetup",
                "Intel",
                "Intel-ACPI",
                "Interbase-1.0",
                "IPA",
                "IPL-1.0",
                "ISC",
                "ISC-Veillard",
                "Jam",
                "JasPer-2.0",
                "jove",
                "JPL-image",
                "JPNIC",
                "JSON",
                "Kastrup",
                "Kazlib",
                "Knuth-CTAN",
                "LAL-1.2",
                "LAL-1.3",
                "Latex2e",
                "Latex2e-translated-notice",
                "Leptonica",
                "LGPL-2.0-only",
                "LGPL-2.0-or-later",
                "LGPL-2.1-only",
                "LGPL-2.1-or-later",
                "LGPL-3.0-only",
                "LGPL-3.0-or-later",
                "LGPLLR",
                "Libpng",
                "libpng-1.6.35",
                "libpng-2.0",
                "libselinux-1.0",
                "libtiff",
                "libutil-David-Nugent",
                "LiLiQ-P-1.1",
                "LiLiQ-R-1.1",
                "LiLiQ-Rplus-1.1",
                "Linux-man-pages-1-para",
                "Linux-man-pages-copyleft",
                "Linux-man-pages-copyleft-2-para",
                "Linux-man-pages-copyleft-var",
                "Linux-OpenIB",
                "LOOP",
                "LPD-document",
                "LPL-1.0",
                "LPL-1.02",
                "LPPL-1.0",
                "LPPL-1.1",
                "LPPL-1.2",
                "LPPL-1.3a",
                "LPPL-1.3c",
                "lsof",
                "Lucida-Bitmap-Fonts",
                "LZMA-SDK-9.11-to-9.20",
                "LZMA-SDK-9.22",
                "Mackerras-3-Clause",
                "Mackerras-3-Clause-acknowledgment",
                "magaz",
                "mailprio",
                "MakeIndex",
                "man2html",
                "Martin-Birgmeier",
                "McPhee-slideshow",
                "metamail",
                "Minpack",
                "MIPS",
                "MirOS",
                "MIT",
                "MIT-0",
                "MIT-advertising",
                "MIT-Click",
                "MIT-CMU",
                "MIT-enna",
                "MIT-feh",
                "MIT-Festival",
                "MIT-Khronos-old",
                "MIT-Modern-Variant",
                "MIT-open-group",
                "MIT-testregex",
                "MIT-Wu",
                "MITNFA",
                "MMIXware",
                "Motosoto",
                "MPEG-SSG",
                "mpi-permissive",
                "mpich2",
                "MPL-1.0",
                "MPL-1.1",
                "MPL-2.0",
                "MPL-2.0-no-copyleft-exception",
                "mplus",
                "MS-LPL",
                "MS-PL",
                "MS-RL",
                "MTLL",
                "MulanPSL-1.0",
                "MulanPSL-2.0",
                "Multics",
                "Mup",
                "NAIST-2003",
                "NASA-1.3",
                "Naumen",
                "NBPL-1.0",
                "NCBI-PD",
                "NCGL-UK-2.0",
                "NCL",
                "NCSA",
                "NetCDF",
                "Newsletr",
                "NGPL",
                "ngrep",
                "NICTA-1.0",
                "NIST-PD",
                "NIST-PD-fallback",
                "NIST-Software",
                "NLOD-1.0",
                "NLOD-2.0",
                "NLPL",
                "Nokia",
                "NOSL",
                "Noweb",
                "NPL-1.0",
                "NPL-1.1",
                "NPOSL-3.0",
                "NRL",
                "NTIA-PD",
                "NTP",
                "NTP-0",
                "O-UDA-1.0",
                "OAR",
                "OCCT-PL",
                "OCLC-2.0",
                "ODbL-1.0",
                "ODC-By-1.0",
                "OFFIS",
                "OFL-1.0",
                "OFL-1.0-no-RFN",
                "OFL-1.0-RFN",
                "OFL-1.1",
                "OFL-1.1-no-RFN",
                "OFL-1.1-RFN",
                "OGC-1.0",
                "OGDL-Taiwan-1.0",
                "OGL-Canada-2.0",
                "OGL-UK-1.0",
                "OGL-UK-2.0",
                "OGL-UK-3.0",
                "OGTSL",
                "OLDAP-1.1",
                "OLDAP-1.2",
                "OLDAP-1.3",
                "OLDAP-1.4",
                "OLDAP-2.0",
                "OLDAP-2.0.1",
                "OLDAP-2.1",
                "OLDAP-2.2",
                "OLDAP-2.2.1",
                "OLDAP-2.2.2",
                "OLDAP-2.3",
                "OLDAP-2.4",
                "OLDAP-2.5",
                "OLDAP-2.6",
                "OLDAP-2.7",
                "OLDAP-2.8",
                "OLFL-1.3",
                "OML",
                "OpenPBS-2.3",
                "OpenSSL",
                "OpenSSL-standalone",
                "OpenVision",
                "OPL-1.0",
                "OPL-UK-3.0",
                "OPUBL-1.0",
                "OSET-PL-2.1",
                "OSL-1.0",
                "OSL-1.1",
                "OSL-2.0",
                "OSL-2.1",
                "OSL-3.0",
                "PADL",
                "Parity-6.0.0",
                "Parity-7.0.0",
                "PDDL-1.0",
                "PHP-3.0",
                "PHP-3.01",
                "Pixar",
                "pkgconf",
                "Plexus",
                "pnmstitch",
                "PolyForm-Noncommercial-1.0.0",
                "PolyForm-Small-Business-1.0.0",
                "PostgreSQL",
                "PPL",
                "PSF-2.0",
                "psfrag",
                "psutils",
                "Python-2.0",
                "Python-2.0.1",
                "python-ldap",
                "Qhull",
                "QPL-1.0",
                "QPL-1.0-INRIA-2004",
                "radvd",
                "Rdisc",
                "RHeCos-1.1",
                "RPL-1.1",
                "RPL-1.5",
                "RPSL-1.0",
                "RSA-MD",
                "RSCPL",
                "Ruby",
                "Ruby-pty",
                "SAX-PD",
                "SAX-PD-2.0",
                "Saxpath",
                "SCEA",
                "SchemeReport",
                "Sendmail",
                "Sendmail-8.23",
                "Sendmail-Open-Source-1.1",
                "SGI-B-1.0",
                "SGI-B-1.1",
                "SGI-B-2.0",
                "SGI-OpenGL",
                "SGP4",
                "SHL-0.5",
                "SHL-0.51",
                "SimPL-2.0",
                "SISSL",
                "SISSL-1.2",
                "SL",
                "Sleepycat",
                "SMAIL-GPL",
                "SMLNJ",
                "SMPPL",
                "SNIA",
                "snprintf",
                "SOFA",
                "softSurfer",
                "Soundex",
                "Spencer-86",
                "Spencer-94",
                "Spencer-99",
                "SPL-1.0",
                "ssh-keyscan",
                "SSH-OpenSSH",
                "SSH-short",
                "SSLeay-standalone",
                "SSPL-1.0",
                "SugarCRM-1.1.3",
                "SUL-1.0",
                "Sun-PPP",
                "Sun-PPP-2000",
                "SunPro",
                "SWL",
                "swrule",
                "Symlinks",
                "TAPR-OHL-1.0",
                "TCL",
                "TCP-wrappers",
                "TermReadKey",
                "TGPPL-1.0",
                "ThirdEye",
                "threeparttable",
                "TMate",
                "TORQUE-1.1",
                "TOSL",
                "TPDL",
                "TPL-1.0",
                "TrustedQSL",
                "TTWL",
                "TTYP0",
                "TU-Berlin-1.0",
                "TU-Berlin-2.0",
                "Ubuntu-font-1.0",
                "UCAR",
                "UCL-1.0",
                "ulem",
                "UMich-Merit",
                "Unicode-3.0",
                "Unicode-DFS-2015",
                "Unicode-DFS-2016",
                "Unicode-TOU",
                "UnixCrypt",
                "Unlicense",
                "Unlicense-libtelnet",
                "Unlicense-libwhirlpool",
                "UPL-1.0",
                "URT-RLE",
                "Vim",
                "VOSTROM",
                "VSL-1.0",
                "W3C",
                "W3C-19980720",
                "W3C-20150513",
                "w3m",
                "Watcom-1.0",
                "Widget-Workshop",
                "Wsuipa",
                "WTFPL",
                "wwl",
                "X11",
                "X11-distribute-modifications-variant",
                "X11-swapped",
                "Xdebug-1.03",
                "Xerox",
                "Xfig",
                "XFree86-1.1",
                "xinetd",
                "xkeyboard-config-Zinoviev",
                "xlock",
                "Xnet",
                "xpp",
                "XSkat",
                "xzoom",
                "YPL-1.0",
                "YPL-1.1",
                "Zed",
                "Zeeff",
                "Zend-2.0",
                "Zimbra-1.3",
                "Zimbra-1.4",
                "Zlib",
                "zlib-acknowledgement",
                "ZPL-1.1",
                "ZPL-2.0",
                "ZPL-2.1"
              ],
              "title": "LicenseId",
              "type": "string"
            },
            {
              "enum": [
                "AGPL-1.0",
                "AGPL-3.0",
                "BSD-2-Clause-FreeBSD",
                "BSD-2-Clause-NetBSD",
                "bzip2-1.0.5",
                "eCos-2.0",
                "GFDL-1.1",
                "GFDL-1.2",
                "GFDL-1.3",
                "GPL-1.0",
                "GPL-1.0+",
                "GPL-2.0",
                "GPL-2.0+",
                "GPL-2.0-with-autoconf-exception",
                "GPL-2.0-with-bison-exception",
                "GPL-2.0-with-classpath-exception",
                "GPL-2.0-with-font-exception",
                "GPL-2.0-with-GCC-exception",
                "GPL-3.0",
                "GPL-3.0+",
                "GPL-3.0-with-autoconf-exception",
                "GPL-3.0-with-GCC-exception",
                "LGPL-2.0",
                "LGPL-2.0+",
                "LGPL-2.1",
                "LGPL-2.1+",
                "LGPL-3.0",
                "LGPL-3.0+",
                "Net-SNMP",
                "Nunit",
                "StandardML-NJ",
                "wxWindows"
              ],
              "title": "DeprecatedLicenseId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
          "examples": [
            "CC0-1.0",
            "MIT",
            "BSD-2-Clause"
          ],
          "title": "License"
        },
        "git_repo": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A URL to the Git repository where the resource is being developed.",
          "examples": [
            "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
          ],
          "title": "Git Repo"
        },
        "icon": {
          "anyOf": [
            {
              "maxLength": 2,
              "minLength": 1,
              "type": "string"
            },
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An icon for illustration, e.g. on bioimage.io",
          "title": "Icon"
        },
        "links": {
          "description": "IDs of other bioimage.io resources",
          "examples": [
            [
              "ilastik/ilastik",
              "deepimagej/deepimagej",
              "zero/notebook_u-net_3d_zerocostdl4mic"
            ]
          ],
          "items": {
            "type": "string"
          },
          "title": "Links",
          "type": "array"
        },
        "uploader": {
          "anyOf": [
            {
              "$ref": "#/$defs/Uploader"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The person who uploaded the model (e.g. to bioimage.io)"
        },
        "maintainers": {
          "description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
          "items": {
            "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
          },
          "title": "Maintainers",
          "type": "array"
        },
        "tags": {
          "description": "Associated tags",
          "examples": [
            [
              "unet2d",
              "pytorch",
              "nucleus",
              "segmentation",
              "dsb2018"
            ]
          ],
          "items": {
            "type": "string"
          },
          "title": "Tags",
          "type": "array"
        },
        "version": {
          "anyOf": [
            {
              "$ref": "#/$defs/Version"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The version of the resource following SemVer 2.0."
        },
        "version_comment": {
          "anyOf": [
            {
              "maxLength": 512,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A comment on the version of the resource.",
          "title": "Version Comment"
        },
        "format_version": {
          "const": "0.3.0",
          "description": "The **format** version of this resource specification",
          "title": "Format Version",
          "type": "string"
        },
        "documentation": {
          "anyOf": [
            {
              "anyOf": [
                {
                  "description": "A URL with the HTTP or HTTPS scheme.",
                  "format": "uri",
                  "maxLength": 2083,
                  "minLength": 1,
                  "title": "HttpUrl",
                  "type": "string"
                },
                {
                  "$ref": "#/$defs/RelativeFilePath"
                },
                {
                  "format": "file-path",
                  "title": "FilePath",
                  "type": "string"
                }
              ],
              "examples": [
                "https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
                "README.md"
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
          "title": "Documentation"
        },
        "badges": {
          "description": "badges associated with this resource",
          "items": {
            "$ref": "#/$defs/BadgeDescr"
          },
          "title": "Badges",
          "type": "array"
        },
        "config": {
          "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Config",
          "description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n    giraffe_neckometer:  # here is the domain name\n        length: 3837283\n        address:\n            home: zoo\n    imagej:              # config specific to ImageJ\n        macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
        },
        "type": {
          "const": "dataset",
          "title": "Type",
          "type": "string"
        },
        "id": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "DatasetId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
          "title": "Id"
        },
        "parent": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "DatasetId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The description from which this one is derived",
          "title": "Parent"
        },
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "\"URL to the source of the dataset.",
          "title": "Source"
        }
      },
      "required": [
        "name",
        "format_version",
        "type"
      ],
      "title": "dataset 0.3.0",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_2__Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_2.Author",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_2__CiteEntry": {
      "additionalProperties": false,
      "properties": {
        "text": {
          "description": "free text description",
          "title": "Text",
          "type": "string"
        },
        "doi": {
          "anyOf": [
            {
              "description": "A digital object identifier, see https://www.doi.org/",
              "pattern": "^10\\.[0-9]{4}.+$",
              "title": "Doi",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details. (alternatively specify `url`)",
          "title": "Doi"
        },
        "url": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL to cite (preferably specify a `doi` instead)",
          "title": "Url"
        }
      },
      "required": [
        "text"
      ],
      "title": "generic.v0_2.CiteEntry",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_2__Maintainer": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Name"
        },
        "github_user": {
          "title": "Github User",
          "type": "string"
        }
      },
      "required": [
        "github_user"
      ],
      "title": "generic.v0_2.Maintainer",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_3__Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_3__BioimageioConfig": {
      "additionalProperties": true,
      "description": "bioimage.io internal metadata.",
      "properties": {},
      "title": "generic.v0_3.BioimageioConfig",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_3__CiteEntry": {
      "additionalProperties": false,
      "description": "A citation that should be referenced in work using this resource.",
      "properties": {
        "text": {
          "description": "free text description",
          "title": "Text",
          "type": "string"
        },
        "doi": {
          "anyOf": [
            {
              "description": "A digital object identifier, see https://www.doi.org/",
              "pattern": "^10\\.[0-9]{4}.+$",
              "title": "Doi",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n    Either **doi** or **url** have to be specified.",
          "title": "Doi"
        },
        "url": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n    Either **doi** or **url** have to be specified.",
          "title": "Url"
        }
      },
      "required": [
        "text"
      ],
      "title": "generic.v0_3.CiteEntry",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_3__Config": {
      "additionalProperties": true,
      "description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
      "properties": {
        "bioimageio": {
          "$ref": "#/$defs/bioimageio__spec__generic__v0_3__BioimageioConfig"
        }
      },
      "title": "generic.v0_3.Config",
      "type": "object"
    },
    "bioimageio__spec__generic__v0_3__Maintainer": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Name"
        },
        "github_user": {
          "title": "Github User",
          "type": "string"
        }
      },
      "required": [
        "github_user"
      ],
      "title": "generic.v0_3.Maintainer",
      "type": "object"
    },
    "bioimageio__spec__model__v0_5__BioimageioConfig": {
      "additionalProperties": true,
      "properties": {
        "reproducibility_tolerance": {
          "default": [],
          "description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
          "items": {
            "$ref": "#/$defs/ReproducibilityTolerance"
          },
          "title": "Reproducibility Tolerance",
          "type": "array"
        },
        "funded_by": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Funding agency, grant number if applicable",
          "title": "Funded By"
        },
        "architecture_type": {
          "anyOf": [
            {
              "maxLength": 32,
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
          "title": "Architecture Type"
        },
        "architecture_description": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Text description of model architecture.",
          "title": "Architecture Description"
        },
        "modality": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
          "title": "Modality"
        },
        "target_structure": {
          "description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
          "items": {
            "type": "string"
          },
          "title": "Target Structure",
          "type": "array"
        },
        "task": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
          "title": "Task"
        },
        "new_version": {
          "anyOf": [
            {
              "minLength": 1,
              "title": "ModelId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "A new version of this model exists with a different model id.",
          "title": "New Version"
        },
        "out_of_scope_use": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
          "title": "Out Of Scope Use"
        },
        "bias_risks_limitations": {
          "$ref": "#/$defs/BiasRisksLimitations",
          "description": "Description of known bias, risks, and technical limitations for in-scope model use."
        },
        "model_parameter_count": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Total number of model parameters.",
          "title": "Model Parameter Count"
        },
        "training": {
          "$ref": "#/$defs/TrainingDetails",
          "description": "Details on how the model was trained."
        },
        "inference_time": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
          "title": "Inference Time"
        },
        "memory_requirements_inference": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
          "title": "Memory Requirements Inference"
        },
        "memory_requirements_training": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
          "title": "Memory Requirements Training"
        },
        "evaluations": {
          "description": "Quantitative model evaluations.\n\nNote:\n    At the moment we recommend to include only a single test dataset\n    (with evaluation factors that may mark subsets of the dataset)\n    to avoid confusion and make the presentation of results cleaner.",
          "items": {
            "$ref": "#/$defs/Evaluation"
          },
          "title": "Evaluations",
          "type": "array"
        },
        "environmental_impact": {
          "$ref": "#/$defs/EnvironmentalImpact",
          "description": "Environmental considerations for model training and deployment"
        }
      },
      "title": "model.v0_5.BioimageioConfig",
      "type": "object"
    },
    "bioimageio__spec__model__v0_5__Config": {
      "additionalProperties": true,
      "properties": {
        "bioimageio": {
          "$ref": "#/$defs/bioimageio__spec__model__v0_5__BioimageioConfig"
        },
        "stardist": {
          "$ref": "#/$defs/YamlValue",
          "default": null
        }
      },
      "title": "model.v0_5.Config",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.\nThese fields are typically stored in a YAML file which we call a model resource description file (model RDF).",
  "properties": {
    "name": {
      "description": "A human-readable name of this model.\nIt should be no longer than 64 characters\nand may only contain letter, number, underscore, minus, parentheses and spaces.\nWe recommend to chose a name that refers to the model's task and image modality.",
      "maxLength": 128,
      "minLength": 5,
      "title": "Name",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A string containing a brief description.",
      "maxLength": 1024,
      "title": "Description",
      "type": "string"
    },
    "covers": {
      "description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
      "examples": [
        [
          "cover.png"
        ]
      ],
      "items": {
        "anyOf": [
          {
            "description": "A URL with the HTTP or HTTPS scheme.",
            "format": "uri",
            "maxLength": 2083,
            "minLength": 1,
            "title": "HttpUrl",
            "type": "string"
          },
          {
            "$ref": "#/$defs/RelativeFilePath"
          },
          {
            "format": "file-path",
            "title": "FilePath",
            "type": "string"
          }
        ]
      },
      "title": "Covers",
      "type": "array"
    },
    "id_emoji": {
      "anyOf": [
        {
          "examples": [
            "\ud83e\udd88",
            "\ud83e\udda5"
          ],
          "maxLength": 2,
          "minLength": 1,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "UTF-8 emoji for display alongside the `id`.",
      "title": "Id Emoji"
    },
    "authors": {
      "description": "The authors are the creators of the model RDF and the primary points of contact.",
      "items": {
        "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
      },
      "title": "Authors",
      "type": "array"
    },
    "attachments": {
      "description": "file attachments",
      "items": {
        "$ref": "#/$defs/FileDescr"
      },
      "title": "Attachments",
      "type": "array"
    },
    "cite": {
      "description": "citations",
      "items": {
        "$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
      },
      "title": "Cite",
      "type": "array"
    },
    "license": {
      "anyOf": [
        {
          "enum": [
            "0BSD",
            "3D-Slicer-1.0",
            "AAL",
            "Abstyles",
            "AdaCore-doc",
            "Adobe-2006",
            "Adobe-Display-PostScript",
            "Adobe-Glyph",
            "Adobe-Utopia",
            "ADSL",
            "AFL-1.1",
            "AFL-1.2",
            "AFL-2.0",
            "AFL-2.1",
            "AFL-3.0",
            "Afmparse",
            "AGPL-1.0-only",
            "AGPL-1.0-or-later",
            "AGPL-3.0-only",
            "AGPL-3.0-or-later",
            "Aladdin",
            "AMD-newlib",
            "AMDPLPA",
            "AML",
            "AML-glslang",
            "AMPAS",
            "ANTLR-PD",
            "ANTLR-PD-fallback",
            "any-OSI",
            "any-OSI-perl-modules",
            "Apache-1.0",
            "Apache-1.1",
            "Apache-2.0",
            "APAFML",
            "APL-1.0",
            "App-s2p",
            "APSL-1.0",
            "APSL-1.1",
            "APSL-1.2",
            "APSL-2.0",
            "Arphic-1999",
            "Artistic-1.0",
            "Artistic-1.0-cl8",
            "Artistic-1.0-Perl",
            "Artistic-2.0",
            "Artistic-dist",
            "Aspell-RU",
            "ASWF-Digital-Assets-1.0",
            "ASWF-Digital-Assets-1.1",
            "Baekmuk",
            "Bahyph",
            "Barr",
            "bcrypt-Solar-Designer",
            "Beerware",
            "Bitstream-Charter",
            "Bitstream-Vera",
            "BitTorrent-1.0",
            "BitTorrent-1.1",
            "blessing",
            "BlueOak-1.0.0",
            "Boehm-GC",
            "Boehm-GC-without-fee",
            "Borceux",
            "Brian-Gladman-2-Clause",
            "Brian-Gladman-3-Clause",
            "BSD-1-Clause",
            "BSD-2-Clause",
            "BSD-2-Clause-Darwin",
            "BSD-2-Clause-first-lines",
            "BSD-2-Clause-Patent",
            "BSD-2-Clause-pkgconf-disclaimer",
            "BSD-2-Clause-Views",
            "BSD-3-Clause",
            "BSD-3-Clause-acpica",
            "BSD-3-Clause-Attribution",
            "BSD-3-Clause-Clear",
            "BSD-3-Clause-flex",
            "BSD-3-Clause-HP",
            "BSD-3-Clause-LBNL",
            "BSD-3-Clause-Modification",
            "BSD-3-Clause-No-Military-License",
            "BSD-3-Clause-No-Nuclear-License",
            "BSD-3-Clause-No-Nuclear-License-2014",
            "BSD-3-Clause-No-Nuclear-Warranty",
            "BSD-3-Clause-Open-MPI",
            "BSD-3-Clause-Sun",
            "BSD-4-Clause",
            "BSD-4-Clause-Shortened",
            "BSD-4-Clause-UC",
            "BSD-4.3RENO",
            "BSD-4.3TAHOE",
            "BSD-Advertising-Acknowledgement",
            "BSD-Attribution-HPND-disclaimer",
            "BSD-Inferno-Nettverk",
            "BSD-Protection",
            "BSD-Source-beginning-file",
            "BSD-Source-Code",
            "BSD-Systemics",
            "BSD-Systemics-W3Works",
            "BSL-1.0",
            "BUSL-1.1",
            "bzip2-1.0.6",
            "C-UDA-1.0",
            "CAL-1.0",
            "CAL-1.0-Combined-Work-Exception",
            "Caldera",
            "Caldera-no-preamble",
            "Catharon",
            "CATOSL-1.1",
            "CC-BY-1.0",
            "CC-BY-2.0",
            "CC-BY-2.5",
            "CC-BY-2.5-AU",
            "CC-BY-3.0",
            "CC-BY-3.0-AT",
            "CC-BY-3.0-AU",
            "CC-BY-3.0-DE",
            "CC-BY-3.0-IGO",
            "CC-BY-3.0-NL",
            "CC-BY-3.0-US",
            "CC-BY-4.0",
            "CC-BY-NC-1.0",
            "CC-BY-NC-2.0",
            "CC-BY-NC-2.5",
            "CC-BY-NC-3.0",
            "CC-BY-NC-3.0-DE",
            "CC-BY-NC-4.0",
            "CC-BY-NC-ND-1.0",
            "CC-BY-NC-ND-2.0",
            "CC-BY-NC-ND-2.5",
            "CC-BY-NC-ND-3.0",
            "CC-BY-NC-ND-3.0-DE",
            "CC-BY-NC-ND-3.0-IGO",
            "CC-BY-NC-ND-4.0",
            "CC-BY-NC-SA-1.0",
            "CC-BY-NC-SA-2.0",
            "CC-BY-NC-SA-2.0-DE",
            "CC-BY-NC-SA-2.0-FR",
            "CC-BY-NC-SA-2.0-UK",
            "CC-BY-NC-SA-2.5",
            "CC-BY-NC-SA-3.0",
            "CC-BY-NC-SA-3.0-DE",
            "CC-BY-NC-SA-3.0-IGO",
            "CC-BY-NC-SA-4.0",
            "CC-BY-ND-1.0",
            "CC-BY-ND-2.0",
            "CC-BY-ND-2.5",
            "CC-BY-ND-3.0",
            "CC-BY-ND-3.0-DE",
            "CC-BY-ND-4.0",
            "CC-BY-SA-1.0",
            "CC-BY-SA-2.0",
            "CC-BY-SA-2.0-UK",
            "CC-BY-SA-2.1-JP",
            "CC-BY-SA-2.5",
            "CC-BY-SA-3.0",
            "CC-BY-SA-3.0-AT",
            "CC-BY-SA-3.0-DE",
            "CC-BY-SA-3.0-IGO",
            "CC-BY-SA-4.0",
            "CC-PDDC",
            "CC-PDM-1.0",
            "CC-SA-1.0",
            "CC0-1.0",
            "CDDL-1.0",
            "CDDL-1.1",
            "CDL-1.0",
            "CDLA-Permissive-1.0",
            "CDLA-Permissive-2.0",
            "CDLA-Sharing-1.0",
            "CECILL-1.0",
            "CECILL-1.1",
            "CECILL-2.0",
            "CECILL-2.1",
            "CECILL-B",
            "CECILL-C",
            "CERN-OHL-1.1",
            "CERN-OHL-1.2",
            "CERN-OHL-P-2.0",
            "CERN-OHL-S-2.0",
            "CERN-OHL-W-2.0",
            "CFITSIO",
            "check-cvs",
            "checkmk",
            "ClArtistic",
            "Clips",
            "CMU-Mach",
            "CMU-Mach-nodoc",
            "CNRI-Jython",
            "CNRI-Python",
            "CNRI-Python-GPL-Compatible",
            "COIL-1.0",
            "Community-Spec-1.0",
            "Condor-1.1",
            "copyleft-next-0.3.0",
            "copyleft-next-0.3.1",
            "Cornell-Lossless-JPEG",
            "CPAL-1.0",
            "CPL-1.0",
            "CPOL-1.02",
            "Cronyx",
            "Crossword",
            "CryptoSwift",
            "CrystalStacker",
            "CUA-OPL-1.0",
            "Cube",
            "curl",
            "cve-tou",
            "D-FSL-1.0",
            "DEC-3-Clause",
            "diffmark",
            "DL-DE-BY-2.0",
            "DL-DE-ZERO-2.0",
            "DOC",
            "DocBook-DTD",
            "DocBook-Schema",
            "DocBook-Stylesheet",
            "DocBook-XML",
            "Dotseqn",
            "DRL-1.0",
            "DRL-1.1",
            "DSDP",
            "dtoa",
            "dvipdfm",
            "ECL-1.0",
            "ECL-2.0",
            "EFL-1.0",
            "EFL-2.0",
            "eGenix",
            "Elastic-2.0",
            "Entessa",
            "EPICS",
            "EPL-1.0",
            "EPL-2.0",
            "ErlPL-1.1",
            "etalab-2.0",
            "EUDatagrid",
            "EUPL-1.0",
            "EUPL-1.1",
            "EUPL-1.2",
            "Eurosym",
            "Fair",
            "FBM",
            "FDK-AAC",
            "Ferguson-Twofish",
            "Frameworx-1.0",
            "FreeBSD-DOC",
            "FreeImage",
            "FSFAP",
            "FSFAP-no-warranty-disclaimer",
            "FSFUL",
            "FSFULLR",
            "FSFULLRSD",
            "FSFULLRWD",
            "FSL-1.1-ALv2",
            "FSL-1.1-MIT",
            "FTL",
            "Furuseth",
            "fwlw",
            "Game-Programming-Gems",
            "GCR-docs",
            "GD",
            "generic-xts",
            "GFDL-1.1-invariants-only",
            "GFDL-1.1-invariants-or-later",
            "GFDL-1.1-no-invariants-only",
            "GFDL-1.1-no-invariants-or-later",
            "GFDL-1.1-only",
            "GFDL-1.1-or-later",
            "GFDL-1.2-invariants-only",
            "GFDL-1.2-invariants-or-later",
            "GFDL-1.2-no-invariants-only",
            "GFDL-1.2-no-invariants-or-later",
            "GFDL-1.2-only",
            "GFDL-1.2-or-later",
            "GFDL-1.3-invariants-only",
            "GFDL-1.3-invariants-or-later",
            "GFDL-1.3-no-invariants-only",
            "GFDL-1.3-no-invariants-or-later",
            "GFDL-1.3-only",
            "GFDL-1.3-or-later",
            "Giftware",
            "GL2PS",
            "Glide",
            "Glulxe",
            "GLWTPL",
            "gnuplot",
            "GPL-1.0-only",
            "GPL-1.0-or-later",
            "GPL-2.0-only",
            "GPL-2.0-or-later",
            "GPL-3.0-only",
            "GPL-3.0-or-later",
            "Graphics-Gems",
            "gSOAP-1.3b",
            "gtkbook",
            "Gutmann",
            "HaskellReport",
            "HDF5",
            "hdparm",
            "HIDAPI",
            "Hippocratic-2.1",
            "HP-1986",
            "HP-1989",
            "HPND",
            "HPND-DEC",
            "HPND-doc",
            "HPND-doc-sell",
            "HPND-export-US",
            "HPND-export-US-acknowledgement",
            "HPND-export-US-modify",
            "HPND-export2-US",
            "HPND-Fenneberg-Livingston",
            "HPND-INRIA-IMAG",
            "HPND-Intel",
            "HPND-Kevlin-Henney",
            "HPND-Markus-Kuhn",
            "HPND-merchantability-variant",
            "HPND-MIT-disclaimer",
            "HPND-Netrek",
            "HPND-Pbmplus",
            "HPND-sell-MIT-disclaimer-xserver",
            "HPND-sell-regexpr",
            "HPND-sell-variant",
            "HPND-sell-variant-MIT-disclaimer",
            "HPND-sell-variant-MIT-disclaimer-rev",
            "HPND-UC",
            "HPND-UC-export-US",
            "HTMLTIDY",
            "IBM-pibs",
            "ICU",
            "IEC-Code-Components-EULA",
            "IJG",
            "IJG-short",
            "ImageMagick",
            "iMatix",
            "Imlib2",
            "Info-ZIP",
            "Inner-Net-2.0",
            "InnoSetup",
            "Intel",
            "Intel-ACPI",
            "Interbase-1.0",
            "IPA",
            "IPL-1.0",
            "ISC",
            "ISC-Veillard",
            "Jam",
            "JasPer-2.0",
            "jove",
            "JPL-image",
            "JPNIC",
            "JSON",
            "Kastrup",
            "Kazlib",
            "Knuth-CTAN",
            "LAL-1.2",
            "LAL-1.3",
            "Latex2e",
            "Latex2e-translated-notice",
            "Leptonica",
            "LGPL-2.0-only",
            "LGPL-2.0-or-later",
            "LGPL-2.1-only",
            "LGPL-2.1-or-later",
            "LGPL-3.0-only",
            "LGPL-3.0-or-later",
            "LGPLLR",
            "Libpng",
            "libpng-1.6.35",
            "libpng-2.0",
            "libselinux-1.0",
            "libtiff",
            "libutil-David-Nugent",
            "LiLiQ-P-1.1",
            "LiLiQ-R-1.1",
            "LiLiQ-Rplus-1.1",
            "Linux-man-pages-1-para",
            "Linux-man-pages-copyleft",
            "Linux-man-pages-copyleft-2-para",
            "Linux-man-pages-copyleft-var",
            "Linux-OpenIB",
            "LOOP",
            "LPD-document",
            "LPL-1.0",
            "LPL-1.02",
            "LPPL-1.0",
            "LPPL-1.1",
            "LPPL-1.2",
            "LPPL-1.3a",
            "LPPL-1.3c",
            "lsof",
            "Lucida-Bitmap-Fonts",
            "LZMA-SDK-9.11-to-9.20",
            "LZMA-SDK-9.22",
            "Mackerras-3-Clause",
            "Mackerras-3-Clause-acknowledgment",
            "magaz",
            "mailprio",
            "MakeIndex",
            "man2html",
            "Martin-Birgmeier",
            "McPhee-slideshow",
            "metamail",
            "Minpack",
            "MIPS",
            "MirOS",
            "MIT",
            "MIT-0",
            "MIT-advertising",
            "MIT-Click",
            "MIT-CMU",
            "MIT-enna",
            "MIT-feh",
            "MIT-Festival",
            "MIT-Khronos-old",
            "MIT-Modern-Variant",
            "MIT-open-group",
            "MIT-testregex",
            "MIT-Wu",
            "MITNFA",
            "MMIXware",
            "Motosoto",
            "MPEG-SSG",
            "mpi-permissive",
            "mpich2",
            "MPL-1.0",
            "MPL-1.1",
            "MPL-2.0",
            "MPL-2.0-no-copyleft-exception",
            "mplus",
            "MS-LPL",
            "MS-PL",
            "MS-RL",
            "MTLL",
            "MulanPSL-1.0",
            "MulanPSL-2.0",
            "Multics",
            "Mup",
            "NAIST-2003",
            "NASA-1.3",
            "Naumen",
            "NBPL-1.0",
            "NCBI-PD",
            "NCGL-UK-2.0",
            "NCL",
            "NCSA",
            "NetCDF",
            "Newsletr",
            "NGPL",
            "ngrep",
            "NICTA-1.0",
            "NIST-PD",
            "NIST-PD-fallback",
            "NIST-Software",
            "NLOD-1.0",
            "NLOD-2.0",
            "NLPL",
            "Nokia",
            "NOSL",
            "Noweb",
            "NPL-1.0",
            "NPL-1.1",
            "NPOSL-3.0",
            "NRL",
            "NTIA-PD",
            "NTP",
            "NTP-0",
            "O-UDA-1.0",
            "OAR",
            "OCCT-PL",
            "OCLC-2.0",
            "ODbL-1.0",
            "ODC-By-1.0",
            "OFFIS",
            "OFL-1.0",
            "OFL-1.0-no-RFN",
            "OFL-1.0-RFN",
            "OFL-1.1",
            "OFL-1.1-no-RFN",
            "OFL-1.1-RFN",
            "OGC-1.0",
            "OGDL-Taiwan-1.0",
            "OGL-Canada-2.0",
            "OGL-UK-1.0",
            "OGL-UK-2.0",
            "OGL-UK-3.0",
            "OGTSL",
            "OLDAP-1.1",
            "OLDAP-1.2",
            "OLDAP-1.3",
            "OLDAP-1.4",
            "OLDAP-2.0",
            "OLDAP-2.0.1",
            "OLDAP-2.1",
            "OLDAP-2.2",
            "OLDAP-2.2.1",
            "OLDAP-2.2.2",
            "OLDAP-2.3",
            "OLDAP-2.4",
            "OLDAP-2.5",
            "OLDAP-2.6",
            "OLDAP-2.7",
            "OLDAP-2.8",
            "OLFL-1.3",
            "OML",
            "OpenPBS-2.3",
            "OpenSSL",
            "OpenSSL-standalone",
            "OpenVision",
            "OPL-1.0",
            "OPL-UK-3.0",
            "OPUBL-1.0",
            "OSET-PL-2.1",
            "OSL-1.0",
            "OSL-1.1",
            "OSL-2.0",
            "OSL-2.1",
            "OSL-3.0",
            "PADL",
            "Parity-6.0.0",
            "Parity-7.0.0",
            "PDDL-1.0",
            "PHP-3.0",
            "PHP-3.01",
            "Pixar",
            "pkgconf",
            "Plexus",
            "pnmstitch",
            "PolyForm-Noncommercial-1.0.0",
            "PolyForm-Small-Business-1.0.0",
            "PostgreSQL",
            "PPL",
            "PSF-2.0",
            "psfrag",
            "psutils",
            "Python-2.0",
            "Python-2.0.1",
            "python-ldap",
            "Qhull",
            "QPL-1.0",
            "QPL-1.0-INRIA-2004",
            "radvd",
            "Rdisc",
            "RHeCos-1.1",
            "RPL-1.1",
            "RPL-1.5",
            "RPSL-1.0",
            "RSA-MD",
            "RSCPL",
            "Ruby",
            "Ruby-pty",
            "SAX-PD",
            "SAX-PD-2.0",
            "Saxpath",
            "SCEA",
            "SchemeReport",
            "Sendmail",
            "Sendmail-8.23",
            "Sendmail-Open-Source-1.1",
            "SGI-B-1.0",
            "SGI-B-1.1",
            "SGI-B-2.0",
            "SGI-OpenGL",
            "SGP4",
            "SHL-0.5",
            "SHL-0.51",
            "SimPL-2.0",
            "SISSL",
            "SISSL-1.2",
            "SL",
            "Sleepycat",
            "SMAIL-GPL",
            "SMLNJ",
            "SMPPL",
            "SNIA",
            "snprintf",
            "SOFA",
            "softSurfer",
            "Soundex",
            "Spencer-86",
            "Spencer-94",
            "Spencer-99",
            "SPL-1.0",
            "ssh-keyscan",
            "SSH-OpenSSH",
            "SSH-short",
            "SSLeay-standalone",
            "SSPL-1.0",
            "SugarCRM-1.1.3",
            "SUL-1.0",
            "Sun-PPP",
            "Sun-PPP-2000",
            "SunPro",
            "SWL",
            "swrule",
            "Symlinks",
            "TAPR-OHL-1.0",
            "TCL",
            "TCP-wrappers",
            "TermReadKey",
            "TGPPL-1.0",
            "ThirdEye",
            "threeparttable",
            "TMate",
            "TORQUE-1.1",
            "TOSL",
            "TPDL",
            "TPL-1.0",
            "TrustedQSL",
            "TTWL",
            "TTYP0",
            "TU-Berlin-1.0",
            "TU-Berlin-2.0",
            "Ubuntu-font-1.0",
            "UCAR",
            "UCL-1.0",
            "ulem",
            "UMich-Merit",
            "Unicode-3.0",
            "Unicode-DFS-2015",
            "Unicode-DFS-2016",
            "Unicode-TOU",
            "UnixCrypt",
            "Unlicense",
            "Unlicense-libtelnet",
            "Unlicense-libwhirlpool",
            "UPL-1.0",
            "URT-RLE",
            "Vim",
            "VOSTROM",
            "VSL-1.0",
            "W3C",
            "W3C-19980720",
            "W3C-20150513",
            "w3m",
            "Watcom-1.0",
            "Widget-Workshop",
            "Wsuipa",
            "WTFPL",
            "wwl",
            "X11",
            "X11-distribute-modifications-variant",
            "X11-swapped",
            "Xdebug-1.03",
            "Xerox",
            "Xfig",
            "XFree86-1.1",
            "xinetd",
            "xkeyboard-config-Zinoviev",
            "xlock",
            "Xnet",
            "xpp",
            "XSkat",
            "xzoom",
            "YPL-1.0",
            "YPL-1.1",
            "Zed",
            "Zeeff",
            "Zend-2.0",
            "Zimbra-1.3",
            "Zimbra-1.4",
            "Zlib",
            "zlib-acknowledgement",
            "ZPL-1.1",
            "ZPL-2.0",
            "ZPL-2.1"
          ],
          "title": "LicenseId",
          "type": "string"
        },
        {
          "enum": [
            "AGPL-1.0",
            "AGPL-3.0",
            "BSD-2-Clause-FreeBSD",
            "BSD-2-Clause-NetBSD",
            "bzip2-1.0.5",
            "eCos-2.0",
            "GFDL-1.1",
            "GFDL-1.2",
            "GFDL-1.3",
            "GPL-1.0",
            "GPL-1.0+",
            "GPL-2.0",
            "GPL-2.0+",
            "GPL-2.0-with-autoconf-exception",
            "GPL-2.0-with-bison-exception",
            "GPL-2.0-with-classpath-exception",
            "GPL-2.0-with-font-exception",
            "GPL-2.0-with-GCC-exception",
            "GPL-3.0",
            "GPL-3.0+",
            "GPL-3.0-with-autoconf-exception",
            "GPL-3.0-with-GCC-exception",
            "LGPL-2.0",
            "LGPL-2.0+",
            "LGPL-2.1",
            "LGPL-2.1+",
            "LGPL-3.0",
            "LGPL-3.0+",
            "Net-SNMP",
            "Nunit",
            "StandardML-NJ",
            "wxWindows"
          ],
          "title": "DeprecatedLicenseId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
      "examples": [
        "CC0-1.0",
        "MIT",
        "BSD-2-Clause"
      ],
      "title": "License"
    },
    "git_repo": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A URL to the Git repository where the resource is being developed.",
      "examples": [
        "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
      ],
      "title": "Git Repo"
    },
    "icon": {
      "anyOf": [
        {
          "maxLength": 2,
          "minLength": 1,
          "type": "string"
        },
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An icon for illustration, e.g. on bioimage.io",
      "title": "Icon"
    },
    "links": {
      "description": "IDs of other bioimage.io resources",
      "examples": [
        [
          "ilastik/ilastik",
          "deepimagej/deepimagej",
          "zero/notebook_u-net_3d_zerocostdl4mic"
        ]
      ],
      "items": {
        "type": "string"
      },
      "title": "Links",
      "type": "array"
    },
    "uploader": {
      "anyOf": [
        {
          "$ref": "#/$defs/Uploader"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The person who uploaded the model (e.g. to bioimage.io)"
    },
    "maintainers": {
      "description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
      "items": {
        "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
      },
      "title": "Maintainers",
      "type": "array"
    },
    "tags": {
      "description": "Associated tags",
      "examples": [
        [
          "unet2d",
          "pytorch",
          "nucleus",
          "segmentation",
          "dsb2018"
        ]
      ],
      "items": {
        "type": "string"
      },
      "title": "Tags",
      "type": "array"
    },
    "version": {
      "anyOf": [
        {
          "$ref": "#/$defs/Version"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The version of the resource following SemVer 2.0."
    },
    "version_comment": {
      "anyOf": [
        {
          "maxLength": 512,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A comment on the version of the resource.",
      "title": "Version Comment"
    },
    "format_version": {
      "const": "0.5.9",
      "description": "Version of the bioimage.io model description specification used.\nWhen creating a new model always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
      "title": "Format Version",
      "type": "string"
    },
    "type": {
      "const": "model",
      "description": "Specialized resource type 'model'",
      "title": "Type",
      "type": "string"
    },
    "id": {
      "anyOf": [
        {
          "minLength": 1,
          "title": "ModelId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
      "title": "Id"
    },
    "documentation": {
      "anyOf": [
        {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "examples": [
            "https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
            "README.md"
          ]
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.\nThe documentation should include a '#[#] Validation' (sub)section\nwith details on how to quantitatively validate the model on unseen data.",
      "title": "Documentation"
    },
    "inputs": {
      "description": "Describes the input tensors expected by this model.",
      "items": {
        "$ref": "#/$defs/InputTensorDescr"
      },
      "minItems": 1,
      "title": "Inputs",
      "type": "array"
    },
    "outputs": {
      "description": "Describes the output tensors.",
      "items": {
        "$ref": "#/$defs/OutputTensorDescr"
      },
      "minItems": 1,
      "title": "Outputs",
      "type": "array"
    },
    "packaged_by": {
      "description": "The persons that have packaged and uploaded this model.\nOnly required if those persons differ from the `authors`.",
      "items": {
        "$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
      },
      "title": "Packaged By",
      "type": "array"
    },
    "parent": {
      "anyOf": [
        {
          "$ref": "#/$defs/LinkedModel"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The model from which this model is derived, e.g. by fine-tuning the weights."
    },
    "run_mode": {
      "anyOf": [
        {
          "$ref": "#/$defs/RunMode"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Custom run mode for this model: for more complex prediction procedures like test time\ndata augmentation that currently cannot be expressed in the specification.\nNo standard run modes are defined yet."
    },
    "timestamp": {
      "$ref": "#/$defs/Datetime",
      "description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).\n(In Python a datetime object is valid, too)."
    },
    "training_data": {
      "anyOf": [
        {
          "$ref": "#/$defs/LinkedDataset"
        },
        {
          "$ref": "#/$defs/bioimageio__spec__dataset__v0_3__DatasetDescr"
        },
        {
          "$ref": "#/$defs/bioimageio__spec__dataset__v0_2__DatasetDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The dataset used to train this model",
      "title": "Training Data"
    },
    "weights": {
      "$ref": "#/$defs/WeightsDescr",
      "description": "The weights for this model.\nWeights can be given for different formats, but should otherwise be equivalent.\nThe available weight formats determine which consumers can use this model."
    },
    "config": {
      "$ref": "#/$defs/bioimageio__spec__model__v0_5__Config"
    }
  },
  "required": [
    "name",
    "format_version",
    "type",
    "inputs",
    "outputs",
    "weights"
  ],
  "title": "model 0.5.9",
  "type": "object"
}
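For orientation, the `required` keys listed at the end of the schema can be checked on a raw RDF dictionary before full validation. A minimal sketch; the field values below are placeholders only, not a working model description:

```python
# Sketch: check the required top-level keys of a model 0.5.9 RDF dict.
# The field values here are placeholders, not a valid model description.
REQUIRED = {"name", "format_version", "type", "inputs", "outputs", "weights"}

def missing_required(rdf: dict) -> set:
    """Return the required keys absent from a raw RDF dictionary."""
    return REQUIRED - rdf.keys()

rdf = {
    "name": "example model",
    "format_version": "0.5.9",
    "type": "model",
    "inputs": [],   # must be non-empty in a real description (minItems: 1)
    "outputs": [],  # likewise
    "weights": {},
}
assert missing_required(rdf) == set()
```

Note that this only checks key presence; the schema additionally constrains each value (e.g. `inputs` and `outputs` must contain at least one tensor description each).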

Fields:

Validators:

  • _check_maintainers_exist
  • warn_about_tag_categories (tags)
  • _remove_version_number
  • _validate_documentation (documentation)
  • _validate_input_axes (inputs)
  • _validate_test_tensors
  • _validate_tensor_references_in_proc_kwargs
  • _validate_tensor_ids (outputs)
  • _validate_output_axes (outputs)
  • _validate_parent_is_not_self
  • _add_default_cover
  • _convert

attachments pydantic-field ¤

attachments: List[FileDescr_]

file attachments

authors pydantic-field ¤

authors: FAIR[List[Author]]

The authors are the creators of the model RDF and the primary points of contact.

cite pydantic-field ¤

cite: FAIR[List[CiteEntry]]

citations

config pydantic-field ¤

config: Config

covers pydantic-field ¤

covers: List[FileSource_cover]

Cover images.

description pydantic-field ¤

description: FAIR[
    Annotated[
        str,
        MaxLen(1024),
        warn(
            MaxLen(512),
            "Description longer than 512 characters.",
        ),
    ]
] = ""

A string containing a brief description.

documentation pydantic-field ¤

documentation: FAIR[Optional[FileSource_documentation]] = (
    None
)

URL or relative path to a markdown file with additional documentation. The recommended documentation file name is README.md. An .md suffix is mandatory. The documentation should include a '#[#] Validation' (sub)section with details on how to quantitatively validate the model on unseen data.

file_name property ¤

file_name: Optional[FileName]

File name of the bioimageio.yaml file the description was loaded from.

format_version pydantic-field ¤

format_version: Literal['0.5.9'] = '0.5.9'

git_repo pydantic-field ¤

git_repo: Annotated[
    Optional[HttpUrl],
    Field(
        examples=[
            "https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
        ]
    ),
] = None

A URL to the Git repository where the resource is being developed.

icon pydantic-field ¤

icon: Union[
    Annotated[str, Len(min_length=1, max_length=2)],
    FileSource_,
    None,
] = None

An icon for illustration, e.g. on bioimage.io

id pydantic-field ¤

id: Optional[ModelId] = None

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

id_emoji pydantic-field ¤

id_emoji: Optional[
    Annotated[
        str,
        Len(min_length=1, max_length=2),
        Field(examples=["🦈", "🦥"]),
    ]
] = None

UTF-8 emoji for display alongside the id.

implemented_format_version class-attribute ¤

implemented_format_version: Literal['0.5.9'] = '0.5.9'

implemented_format_version_tuple class-attribute ¤

implemented_format_version_tuple: Tuple[int, int, int]
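Presumably this tuple mirrors the literal `implemented_format_version` string. A sketch of how such a tuple can be derived, and why it is useful:

```python
# Sketch: derive a (major, minor, patch) tuple from the literal format
# version string, mirroring implemented_format_version_tuple.
implemented_format_version = "0.5.9"

version_tuple = tuple(int(p) for p in implemented_format_version.split("."))
assert version_tuple == (0, 5, 9)

# Integer tuples give correct ordered comparisons, unlike raw strings
# (where "0.10.0" < "0.9.0" lexicographically):
assert version_tuple > (0, 4, 10)
```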

implemented_type class-attribute ¤

implemented_type: Literal['model'] = 'model'

inputs pydantic-field ¤

inputs: NotEmpty[Sequence[InputTensorDescr]]

Describes the input tensors expected by this model.

license pydantic-field ¤

license: FAIR[
    Annotated[
        Annotated[
            Union[LicenseId, DeprecatedLicenseId, None],
            Field(union_mode="left_to_right"),
        ],
        warn(
            Optional[LicenseId],
            "{value} is deprecated, see https://spdx.org/licenses/{value}.html",
        ),
        Field(examples=["CC0-1.0", "MIT", "BSD-2-Clause"]),
    ]
] = None

An SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need one, please open a GitHub issue to discuss your intentions with the community.
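The union above is resolved left to right: a value is first matched against current SPDX identifiers, then against deprecated ones (which are accepted but trigger a warning). A sketch of that resolution order, using a tiny excerpt of the identifier sets shown in the schema:

```python
# Sketch: classify a license identifier the way the left_to_right union
# above resolves it. Only a small excerpt of each identifier set is used.
LICENSE_IDS = {"MIT", "CC0-1.0", "BSD-2-Clause", "Apache-2.0"}
DEPRECATED_IDS = {"GPL-2.0", "GPL-3.0", "LGPL-2.1", "AGPL-3.0"}

def classify_license(value):
    if value in LICENSE_IDS:
        return "LicenseId"
    if value in DEPRECATED_IDS:
        return "DeprecatedLicenseId"  # accepted, but a warning is emitted
    return None  # neither: the value is rejected (unless it is null)

assert classify_license("MIT") == "LicenseId"
assert classify_license("GPL-3.0") == "DeprecatedLicenseId"
```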

links: Annotated[
    List[str],
    Field(
        examples=[
            (
                "ilastik/ilastik",
                "deepimagej/deepimagej",
                "zero/notebook_u-net_3d_zerocostdl4mic",
            )
        ]
    ),
]

IDs of other bioimage.io resources

maintainers pydantic-field ¤

maintainers: List[Maintainer]

Maintainers of this resource. If not specified, the authors are the maintainers, and at least some of them have to specify their github_user name.

name pydantic-field ¤

name: Annotated[
    str,
    RestrictCharacters(
        string.ascii_letters + string.digits + "_+- ()"
    ),
    MinLen(5),
    MaxLen(128),
    warn(
        MaxLen(64), "Name longer than 64 characters.", INFO
    ),
]

A human-readable name of this model. It should be no longer than 64 characters and may only contain letters, numbers, underscores, minus signs, parentheses, and spaces. We recommend choosing a name that refers to the model's task and image modality.
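The constraints encoded in the annotation above (restricted character set, length 5 to 128, warning beyond 64) can be sketched as a plain check. This is an illustrative re-implementation, not the spec's own validator:

```python
import string

# Character set from the RestrictCharacters annotation above.
ALLOWED = set(string.ascii_letters + string.digits + "_+- ()")

def check_name(name: str) -> list:
    """Return problems with a model name per the stated rules."""
    problems = []
    if not 5 <= len(name) <= 128:
        problems.append("length must be 5..128 characters")
    if set(name) - ALLOWED:
        problems.append("disallowed characters")
    elif len(name) > 64:
        problems.append("warning: name longer than 64 characters")
    return problems

assert check_name("2D UNet (nuclei)") == []
assert check_name("bad/name!") == ["disallowed characters"]
```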

outputs pydantic-field ¤

outputs: NotEmpty[Sequence[OutputTensorDescr]]

Describes the output tensors.

packaged_by pydantic-field ¤

packaged_by: List[Author]

The persons that have packaged and uploaded this model. Only required if those persons differ from the authors.

parent pydantic-field ¤

parent: Optional[LinkedModel] = None

The model from which this model is derived, e.g. by fine-tuning the weights.

root property ¤

root: Union[RootHttpUrl, DirectoryPath, ZipFile]

The URL/Path prefix to resolve any relative paths with.

run_mode pydantic-field ¤

run_mode: Annotated[
    Optional[RunMode],
    warn(
        None,
        "Run mode '{value}' has limited support across consumer softwares.",
    ),
] = None

Custom run mode for this model, for more complex prediction procedures (like test-time data augmentation) that currently cannot be expressed in the specification. No standard run modes are defined yet.

tags pydantic-field ¤

tags: FAIR[
    Annotated[
        List[str],
        Field(
            examples=[
                (
                    "unet2d",
                    "pytorch",
                    "nucleus",
                    "segmentation",
                    "dsb2018",
                )
            ]
        ),
    ]
]

Associated tags

timestamp pydantic-field ¤

timestamp: Datetime

Timestamp in ISO 8601 format with a few restrictions listed here. (In Python a datetime object is valid, too).
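
Since a Python `datetime` object is also accepted, producing a compliant timestamp only needs the standard library; this is a minimal sketch using `datetime.isoformat`:

```python
from datetime import datetime, timezone

# A datetime object is valid directly; a string must be ISO 8601.
ts = datetime(2024, 3, 1, 12, 30, 0, tzinfo=timezone.utc)
iso = ts.isoformat()
print(iso)  # 2024-03-01T12:30:00+00:00

# Round-tripping with the standard library:
parsed = datetime.fromisoformat(iso)
assert parsed == ts
```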

training_data pydantic-field ¤

training_data: Annotated[
    Union[
        None, LinkedDataset, DatasetDescr, DatasetDescr02
    ],
    Field(union_mode="left_to_right"),
] = None

The dataset used to train this model

type pydantic-field ¤

type: Literal['model'] = 'model'

uploader pydantic-field ¤

uploader: Optional[Uploader] = None

The person who uploaded the model (e.g. to bioimage.io)

validation_summary property ¤

validation_summary: ValidationSummary

version pydantic-field ¤

version: Optional[Version] = None

The version of the resource following SemVer 2.0.

version_comment pydantic-field ¤

version_comment: Optional[Annotated[str, MaxLen(512)]] = (
    None
)

A comment on the version of the resource.

weights pydantic-field ¤

The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any):
    super().__pydantic_init_subclass__(**kwargs)
    # set classvar implemented_format_version_tuple
    if "format_version" in cls.model_fields:
        if "." not in cls.implemented_format_version:
            cls.implemented_format_version_tuple = (0, 0, 0)
        else:
            fv_tuple = get_format_version_tuple(cls.implemented_format_version)
            assert fv_tuple is not None, (
                f"failed to cast '{cls.implemented_format_version}' to tuple"
            )
            cls.implemented_format_version_tuple = fv_tuple
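
The cast performed by `get_format_version_tuple` can be sketched as follows; `format_version_tuple` is a hypothetical stand-in for illustration, assuming a strict 'MAJOR.MINOR.PATCH' parse:

```python
def format_version_tuple(fv: str):
    """Parse 'MAJOR.MINOR.PATCH' into a tuple of ints, or None on failure."""
    parts = fv.split(".")
    if len(parts) != 3 or any(not p.isdigit() for p in parts):
        return None
    return tuple(int(p) for p in parts)

print(format_version_tuple("0.5.9"))   # (0, 5, 9)
print(format_version_tuple("latest"))  # None
```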

convert_from_old_format_wo_validation classmethod ¤

convert_from_old_format_wo_validation(
    data: Dict[str, Any],
) -> None

Convert metadata following an older format version to this class's format without validating the result.

Source code in src/bioimageio/spec/model/v0_5.py
@classmethod
def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
    """Convert metadata following an older format version to this class's format
    without validating the result.
    """
    if (
        data.get("type") == "model"
        and isinstance(fv := data.get("format_version"), str)
        and fv.count(".") == 2
    ):
        fv_parts = fv.split(".")
        if any(not p.isdigit() for p in fv_parts):
            return

        fv_tuple = tuple(map(int, fv_parts))

        assert cls.implemented_format_version_tuple[0:2] == (0, 5)
        if fv_tuple[:2] in ((0, 3), (0, 4)):
            m04 = _ModelDescr_v0_4.load(data)
            if isinstance(m04, InvalidDescr):
                try:
                    updated = _model_conv.convert_as_dict(
                        m04  # pyright: ignore[reportArgumentType]
                    )
                except Exception as e:
                    logger.error(
                        "Failed to convert from invalid model 0.4 description."
                        + f"\nerror: {e}"
                        + "\nProceeding with model 0.5 validation without conversion."
                    )
                    updated = None
            else:
                updated = _model_conv.convert_as_dict(m04)

            if updated is not None:
                data.clear()
                data.update(updated)

        elif fv_tuple[:2] == (0, 5):
            # bump patch version
            data["format_version"] = cls.implemented_format_version

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get_axis_sizes ¤

get_axis_sizes(
    ns: Mapping[
        Tuple[TensorId, AxisId], ParameterizedSize_N
    ],
    batch_size: Optional[int] = None,
    *,
    max_input_shape: Optional[
        Mapping[Tuple[TensorId, AxisId], int]
    ] = None,
) -> _AxisSizes

Determine input and output block shape for scale factors ns of parameterized input sizes.

Parameters:

Name Type Description Default

ns ¤

Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N]

Scale factor n for each axis (keyed by (tensor_id, axis_id)) that is parameterized as size = min + n * step.

required

batch_size ¤

Optional[int]

The desired size of the batch dimension. If given, batch_size overwrites any batch size present in max_input_shape. Default 1.

None

max_input_shape ¤

Optional[Mapping[Tuple[TensorId, AxisId], int]]

Limits the derived block shapes. Each axis for which the input size, parameterized by n, is larger than max_input_shape is set to the minimal value n_min for which this is still true. Use this for small input samples or large values of ns. Or simply whenever you know the full input shape.

None

Returns:

Type Description
_AxisSizes

Resolved axis sizes for model inputs and outputs.

Source code in src/bioimageio/spec/model/v0_5.py
def get_axis_sizes(
    self,
    ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N],
    batch_size: Optional[int] = None,
    *,
    max_input_shape: Optional[Mapping[Tuple[TensorId, AxisId], int]] = None,
) -> _AxisSizes:
    """Determine input and output block shape for scale factors **ns**
    of parameterized input sizes.

    Args:
        ns: Scale factor `n` for each axis (keyed by (tensor_id, axis_id))
            that is parameterized as `size = min + n * step`.
        batch_size: The desired size of the batch dimension.
            If given **batch_size** overwrites any batch size present in
            **max_input_shape**. Default 1.
        max_input_shape: Limits the derived block shapes.
            Each axis for which the input size, parameterized by `n`, is larger
            than **max_input_shape** is set to the minimal value `n_min` for which
            this is still true.
            Use this for small input samples or large values of **ns**.
            Or simply whenever you know the full input shape.

    Returns:
        Resolved axis sizes for model inputs and outputs.
    """
    max_input_shape = max_input_shape or {}
    if batch_size is None:
        for (_t_id, a_id), s in max_input_shape.items():
            if a_id == BATCH_AXIS_ID:
                batch_size = s
                break
        else:
            batch_size = 1

    all_axes = {
        t.id: {a.id: a for a in t.axes} for t in chain(self.inputs, self.outputs)
    }

    inputs: Dict[Tuple[TensorId, AxisId], int] = {}
    outputs: Dict[Tuple[TensorId, AxisId], Union[int, _DataDepSize]] = {}

    def get_axis_size(a: Union[InputAxis, OutputAxis]):
        if isinstance(a, BatchAxis):
            if (t_descr.id, a.id) in ns:
                logger.warning(
                    "Ignoring unexpected size increment factor (n) for batch axis"
                    + " of tensor '{}'.",
                    t_descr.id,
                )
            return batch_size
        elif isinstance(a.size, int):
            if (t_descr.id, a.id) in ns:
                logger.warning(
                    "Ignoring unexpected size increment factor (n) for fixed size"
                    + " axis '{}' of tensor '{}'.",
                    a.id,
                    t_descr.id,
                )
            return a.size
        elif isinstance(a.size, ParameterizedSize):
            if (t_descr.id, a.id) not in ns:
                raise ValueError(
                    "Size increment factor (n) missing for parametrized axis"
                    + f" '{a.id}' of tensor '{t_descr.id}'."
                )
            n = ns[(t_descr.id, a.id)]
            s_max = max_input_shape.get((t_descr.id, a.id))
            if s_max is not None:
                n = min(n, a.size.get_n(s_max))

            return a.size.get_size(n)

        elif isinstance(a.size, SizeReference):
            if (t_descr.id, a.id) in ns:
                logger.warning(
                    "Ignoring unexpected size increment factor (n) for axis '{}'"
                    + " of tensor '{}' with size reference.",
                    a.id,
                    t_descr.id,
                )
            assert not isinstance(a, BatchAxis)
            ref_axis = all_axes[a.size.tensor_id][a.size.axis_id]
            assert not isinstance(ref_axis, BatchAxis)
            ref_key = (a.size.tensor_id, a.size.axis_id)
            ref_size = inputs.get(ref_key, outputs.get(ref_key))
            assert ref_size is not None, ref_key
            assert not isinstance(ref_size, _DataDepSize), ref_key
            return a.size.get_size(
                axis=a,
                ref_axis=ref_axis,
                ref_size=ref_size,
            )
        elif isinstance(a.size, DataDependentSize):
            if (t_descr.id, a.id) in ns:
                logger.warning(
                    "Ignoring unexpected increment factor (n) for data dependent"
                    + " size axis '{}' of tensor '{}'.",
                    a.id,
                    t_descr.id,
                )
            return _DataDepSize(a.size.min, a.size.max)
        else:
            assert_never(a.size)

    # first resolve all but the `SizeReference` input sizes
    for t_descr in self.inputs:
        for a in t_descr.axes:
            if not isinstance(a.size, SizeReference):
                s = get_axis_size(a)
                assert not isinstance(s, _DataDepSize)
                inputs[t_descr.id, a.id] = s

    # resolve all other input axis sizes
    for t_descr in self.inputs:
        for a in t_descr.axes:
            if isinstance(a.size, SizeReference):
                s = get_axis_size(a)
                assert not isinstance(s, _DataDepSize)
                inputs[t_descr.id, a.id] = s

    # resolve all output axis sizes
    for t_descr in self.outputs:
        for a in t_descr.axes:
            assert not isinstance(a.size, ParameterizedSize)
            s = get_axis_size(a)
            outputs[t_descr.id, a.id] = s

    return _AxisSizes(inputs=inputs, outputs=outputs)
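
The arithmetic behind a parameterized axis is `size = min + n * step`; a self-contained sketch of how a scale factor `n` resolves to a concrete size, with the `s_max` cap mirroring the **max_input_shape** handling above (the floor-based cap here is an assumption, chosen so the resulting size never exceeds `s_max`):

```python
def parameterized_size(minimum: int, step: int, n: int, s_max=None) -> int:
    """Resolve size = minimum + n * step, optionally capped at s_max."""
    if s_max is not None and step > 0:
        # largest n for which minimum + n * step <= s_max
        n = min(n, max(0, (s_max - minimum) // step))
    return minimum + n * step

print(parameterized_size(16, 8, n=3))             # 40
print(parameterized_size(16, 8, n=10, s_max=50))  # 48 (n capped at 4)
```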

get_batch_size staticmethod ¤

get_batch_size(
    tensor_sizes: Mapping[TensorId, Mapping[AxisId, int]],
) -> int
Source code in src/bioimageio/spec/model/v0_5.py
@staticmethod
def get_batch_size(tensor_sizes: Mapping[TensorId, Mapping[AxisId, int]]) -> int:
    batch_size = 1
    tensor_with_batchsize: Optional[TensorId] = None
    for tid in tensor_sizes:
        for aid, s in tensor_sizes[tid].items():
            if aid != BATCH_AXIS_ID or s == 1 or s == batch_size:
                continue

            if batch_size != 1:
                assert tensor_with_batchsize is not None
                raise ValueError(
                    f"batch size mismatch for tensors '{tensor_with_batchsize}' ({batch_size}) and '{tid}' ({s})"
                )

            batch_size = s
            tensor_with_batchsize = tid

    return batch_size
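
The consistency check above can be sketched in a self-contained form, using plain strings in place of `TensorId`/`AxisId` (the tensor and axis names are hypothetical): sizes of 1 are treated as broadcastable, and two differing batch sizes greater than 1 raise an error.

```python
def batch_size_of(tensor_sizes, batch_axis="batch"):
    """Return the unique batch size > 1 across tensors, defaulting to 1."""
    batch_size, seen_in = 1, None
    for tid, axes in tensor_sizes.items():
        for aid, s in axes.items():
            if aid != batch_axis or s in (1, batch_size):
                continue
            if batch_size != 1:
                raise ValueError(
                    f"batch size mismatch for '{seen_in}' ({batch_size}) and '{tid}' ({s})"
                )
            batch_size, seen_in = s, tid
    return batch_size

print(batch_size_of({"raw": {"batch": 4, "x": 256}, "mask": {"batch": 4}}))  # 4
```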

get_input_test_arrays ¤

get_input_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
def get_input_test_arrays(self) -> List[NDArray[Any]]:
    return self._get_test_arrays(self.inputs)

get_ns ¤

get_ns(
    input_sizes: Mapping[TensorId, Mapping[AxisId, int]],
)

get parameter n for each parameterized axis such that the valid input size is >= the given input size

Source code in src/bioimageio/spec/model/v0_5.py
def get_ns(self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]):
    """get parameter `n` for each parameterized axis
    such that the valid input size is >= the given input size"""
    ret: Dict[Tuple[TensorId, AxisId], ParameterizedSize_N] = {}
    axes = {t.id: {a.id: a for a in t.axes} for t in self.inputs}
    for tid in input_sizes:
        for aid, s in input_sizes[tid].items():
            size_descr = axes[tid][aid].size
            if isinstance(size_descr, ParameterizedSize):
                ret[(tid, aid)] = size_descr.get_n(s)
            elif size_descr is None or isinstance(size_descr, (int, SizeReference)):
                pass
            else:
                assert_never(size_descr)

    return ret
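
For a single parameterized axis, the inverse computation (finding `n` such that `min + n * step >= s`) can be sketched like this; `smallest_n` is a hypothetical helper mirroring what `ParameterizedSize.get_n` is described to do here:

```python
import math

def smallest_n(minimum: int, step: int, s: int) -> int:
    """Smallest n with minimum + n * step >= s (0 if s is already covered)."""
    if s <= minimum or step <= 0:
        return 0
    return math.ceil((s - minimum) / step)

print(smallest_n(16, 8, 50))  # 5 -> size 16 + 5*8 = 56 >= 50
print(smallest_n(16, 8, 40))  # 3 -> size exactly 40
```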

get_output_tensor_sizes ¤

get_output_tensor_sizes(
    input_sizes: Mapping[TensorId, Mapping[AxisId, int]],
) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]

Returns the tensor output sizes for the given input_sizes. The output sizes are exact only if input_sizes is a valid input shape; otherwise they may be larger than the actual (valid) output sizes.

Source code in src/bioimageio/spec/model/v0_5.py
def get_output_tensor_sizes(
    self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]
) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]:
    """Returns the tensor output sizes for given **input_sizes**.
    Only if **input_sizes** has a valid input shape, the tensor output size is exact.
    Otherwise it might be larger than the actual (valid) output"""
    batch_size = self.get_batch_size(input_sizes)
    ns = self.get_ns(input_sizes)

    tensor_sizes = self.get_tensor_sizes(ns, batch_size=batch_size)
    return tensor_sizes.outputs

get_output_test_arrays ¤

get_output_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
def get_output_test_arrays(self) -> List[NDArray[Any]]:
    return self._get_test_arrays(self.outputs)

get_package_content ¤

get_package_content() -> Dict[
    FileName, Union[FileDescr, BioimageioYamlContent]
]

Returns package content without creating the package.

Source code in src/bioimageio/spec/_internal/common_nodes.py
def get_package_content(
    self,
) -> Dict[FileName, Union[FileDescr, BioimageioYamlContent]]:
    """Returns package content without creating the package."""
    content: Dict[FileName, FileDescr] = {}
    with PackagingContext(
        bioimageio_yaml_file_name=BIOIMAGEIO_YAML,
        file_sources=content,
    ):
        rdf_content: BioimageioYamlContent = self.model_dump(
            mode="json", exclude_unset=True
        )

    _ = rdf_content.pop("rdf_source", None)

    return {**content, BIOIMAGEIO_YAML: rdf_content}

get_tensor_sizes ¤

get_tensor_sizes(
    ns: Mapping[
        Tuple[TensorId, AxisId], ParameterizedSize_N
    ],
    batch_size: int,
) -> _TensorSizes
Source code in src/bioimageio/spec/model/v0_5.py
def get_tensor_sizes(
    self, ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N], batch_size: int
) -> _TensorSizes:
    axis_sizes = self.get_axis_sizes(ns, batch_size=batch_size)
    return _TensorSizes(
        {
            t: {
                aa: axis_sizes.inputs[(tt, aa)]
                for tt, aa in axis_sizes.inputs
                if tt == t
            }
            for t in {tt for tt, _ in axis_sizes.inputs}
        },
        {
            t: {
                aa: axis_sizes.outputs[(tt, aa)]
                for tt, aa in axis_sizes.outputs
                if tt == t
            }
            for t in {tt for tt, _ in axis_sizes.outputs}
        },
    )
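
The regrouping performed for `_TensorSizes` above, turning flat `(tensor_id, axis_id) -> size` entries into per-tensor dicts, can be sketched with `dict.setdefault` (ids are plain strings here for illustration):

```python
# Flat axis sizes as produced by get_axis_sizes (hypothetical example data).
flat = {("raw", "batch"): 1, ("raw", "x"): 256, ("mask", "x"): 256}

nested = {}
for (tid, aid), s in flat.items():
    nested.setdefault(tid, {})[aid] = s

print(nested["raw"]["x"])  # 256
```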

load classmethod ¤

load(
    data: IncompleteDescrView,
    context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]

factory method to create a resource description object

Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def load(
    cls,
    data: IncompleteDescrView,
    context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]:
    """factory method to create a resource description object"""

    context = context or get_validation_context()
    if context.perform_io_checks:
        file_descrs = extract_file_descrs(data)
        populate_cache(file_descrs)  # TODO: add progress bar

    with context.replace(log_warnings=context.warning_level <= INFO):
        rd, errors, val_warnings = cls._load_impl(deepcopy_incomplete_descr(data))

    if context.warning_level > INFO:
        all_warnings_context = context.replace(
            warning_level=INFO, log_warnings=False, raise_errors=False
        )
        # raise all validation warnings by reloading
        with all_warnings_context:
            _, _, val_warnings = cls._load_impl(deepcopy_incomplete_descr(data))

    format_status = "failed" if errors else "passed"
    rd.validation_summary.add_detail(
        ValidationDetail(
            errors=errors,
            name=(
                "bioimageio.spec format validation"
                f" {rd.type} {cls.implemented_format_version}"
            ),
            status=format_status,
            warnings=val_warnings,
        ),
        update_status=False,  # this special validation detail needs manual format updating below
    )
    assert format_status != "failed" or isinstance(rd, InvalidDescr)

    return rd

load_from_kwargs classmethod ¤

load_from_kwargs(
    context: Optional[ValidationContext] = None,
    *args: P.args,
    **kwargs: P.kwargs,
) -> Union[T, InvalidDescr]
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def load_from_kwargs(
    cls: Callable[P, T],
    context: Optional[ValidationContext] = None,
    *args: P.args,
    **kwargs: P.kwargs,
) -> Union[T, InvalidDescr]:
    sig = signature(cls)
    bound = sig.bind_partial(*args, **kwargs)
    return cls.load(dict(bound.arguments), context=context)  # pyright: ignore[reportFunctionMemberAccess]

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

package ¤

package(
    dest: Optional[
        Union[ZipFile, IO[bytes], Path, str]
    ] = None,
) -> ZipFile

package the described resource as a zip archive

Parameters:

Name Type Description Default

dest ¤

Optional[Union[ZipFile, IO[bytes], Path, str]]

(path/bytes stream of) destination zipfile

None
Source code in src/bioimageio/spec/_internal/common_nodes.py
def package(
    self, dest: Optional[Union[ZipFile, IO[bytes], Path, str]] = None, /
) -> ZipFile:
    """package the described resource as a zip archive

    Args:
        dest: (path/bytes stream of) destination zipfile
    """
    if dest is None:
        dest = BytesIO()

    if isinstance(dest, ZipFile):
        zip = dest
        if "r" in zip.mode:
            raise ValueError(
                f"zip file {dest} opened in '{zip.mode}' mode,"
                + " but write access is needed for packaging."
            )
    else:
        zip = ZipFile(dest, mode="w")

    if zip.filename is None:
        zip.filename = (
            str(getattr(self, "id", getattr(self, "name", "bioimageio"))) + ".zip"
        )

    content = self.get_package_content()
    write_content_to_zip(content, zip)
    return zip
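
The `dest is None` path above amounts to building an in-memory zip on a `BytesIO` buffer; this standard-library sketch mimics that pattern (the file name and content are stand-ins, not the actual `BIOIMAGEIO_YAML` content written by `get_package_content`):

```python
from io import BytesIO
from zipfile import ZipFile

# With no destination given, an in-memory BytesIO-backed zip is created.
dest = BytesIO()
with ZipFile(dest, mode="w") as zf:
    zf.writestr("rdf.yaml", "type: model\n")  # stand-in for the package content

# The buffer now holds a complete zip archive:
with ZipFile(BytesIO(dest.getvalue())) as zf:
    print(zf.namelist())  # ['rdf.yaml']
```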

validate_input_tensors ¤

validate_input_tensors(
    sources: Union[
        Sequence[NDArray[Any]],
        Mapping[TensorId, Optional[NDArray[Any]]],
    ],
) -> Mapping[TensorId, Optional[NDArray[Any]]]

Check if the given input tensors match the model's input tensor descriptions. This includes checks of tensor shapes and dtypes, but not of the actual values.

Source code in src/bioimageio/spec/model/v0_5.py
def validate_input_tensors(
    self,
    sources: Union[
        Sequence[NDArray[Any]], Mapping[TensorId, Optional[NDArray[Any]]]
    ],
) -> Mapping[TensorId, Optional[NDArray[Any]]]:
    """Check if the given input tensors match the model's input tensor descriptions.
    This includes checks of tensor shapes and dtypes, but not of the actual values.
    """
    if not isinstance(sources, collections.abc.Mapping):
        sources = {descr.id: tensor for descr, tensor in zip(self.inputs, sources)}

    tensors = {descr.id: (descr, sources.get(descr.id)) for descr in self.inputs}
    validate_tensors(tensors)

    return sources

warn_about_tag_categories pydantic-validator ¤

warn_about_tag_categories(
    value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
@as_warning
@field_validator("tags")
@classmethod
def warn_about_tag_categories(
    cls, value: List[str], info: ValidationInfo
) -> List[str]:
    categories = TAG_CATEGORIES.get(info.data["type"], {})
    missing_categories: List[Dict[str, Sequence[str]]] = []
    for cat, entries in categories.items():
        if not any(e in value for e in entries):
            missing_categories.append({cat: entries})

    if missing_categories:
        raise ValueError(
            f"Missing tags from bioimage.io categories: {missing_categories}"
        )

    return value

ModelId ¤

Bases: ResourceId


              flowchart TD
              bioimageio.spec.model.v0_5.ModelId[ModelId]
              bioimageio.spec.generic.v0_3.ResourceId[ResourceId]
              bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]

              bioimageio.spec.generic.v0_3.ResourceId --> bioimageio.spec.model.v0_5.ModelId
              bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.generic.v0_3.ResourceId

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        NotEmpty[str],
        RestrictCharacters(
            string.ascii_lowercase + string.digits + "_-/."
        ),
        annotated_types.Predicate(
            lambda s: (
                not (s.startswith("/") or s.endswith("/"))
            )
        ),
    ]
]

the pydantic root model to validate the string
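
The constraints encoded in `root_model` (non-empty, restricted characters, no leading or trailing `/`) can be checked with a small sketch; `is_valid_resource_id` is a hypothetical helper restating those rules, not the spec's validator:

```python
import string

ALLOWED = set(string.ascii_lowercase + string.digits + "_-/.")

def is_valid_resource_id(s: str) -> bool:
    """Non-empty, restricted character set, no leading or trailing '/'."""
    return (
        bool(s)
        and all(c in ALLOWED for c in s)
        and not s.startswith("/")
        and not s.endswith("/")
    )

print(is_valid_resource_id("ilastik/ilastik"))  # True
print(is_valid_resource_id("/leading-slash"))   # False
print(is_valid_resource_id("UpperCase"))        # False: lowercase only
```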

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

NominalOrOrdinalDataDescr pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "values": {
      "anyOf": [
        {
          "items": {
            "type": "integer"
          },
          "minItems": 1,
          "type": "array"
        },
        {
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "type": "array"
        },
        {
          "items": {
            "type": "boolean"
          },
          "minItems": 1,
          "type": "array"
        },
        {
          "items": {
            "type": "string"
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
      "title": "Values"
    },
    "type": {
      "default": "uint8",
      "enum": [
        "float32",
        "float64",
        "uint8",
        "int8",
        "uint16",
        "int16",
        "uint32",
        "int32",
        "uint64",
        "int64",
        "bool"
      ],
      "examples": [
        "float32",
        "uint8",
        "uint16",
        "int64",
        "bool"
      ],
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "const": "arbitrary unit",
          "type": "string"
        },
        {
          "description": "An SI unit",
          "minLength": 1,
          "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
          "title": "SiUnit",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    }
  },
  "required": [
    "values"
  ],
  "title": "model.v0_5.NominalOrOrdinalDataDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_values_match_type

range property ¤

range

type pydantic-field ¤

type: Annotated[
    NominalOrOrdinalDType,
    Field(
        examples=[
            "float32",
            "uint8",
            "uint16",
            "int64",
            "bool",
        ]
    ),
] = "uint8"

unit pydantic-field ¤

unit: Optional[Union[Literal["arbitrary unit"], SiUnit]] = (
    None
)
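
The SiUnit type referenced above is validated against a pattern allowing an optional SI prefix, a base unit with an optional integer exponent, and further composition via "·" (multiply) or "/" (divide). A rough sketch using a trimmed, hypothetical subset of the prefixes and base units (not the spec's full pattern):

```python
import re

# Hypothetical, trimmed subset of the spec's SiUnit pattern:
# optional SI prefix + base unit + optional integer exponent,
# composed further with "·" (multiply) or "/" (divide).
PREFIX = r"(?:G|M|k|h|da|d|c|m|µ|n|p)?"
BASE = r"(?:m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|L)"
EXP = r"(?:\^[+-]?[1-9]\d*)?"
UNIT = PREFIX + BASE + EXP
SI_UNIT = re.compile(f"^{UNIT}(?:[·/]{UNIT})*$")

for candidate in ["mm", "µm^2", "N·m", "mol/L", "foo"]:
    print(candidate, bool(SI_UNIT.match(candidate)))
```

With this subset, "mm", "µm^2", "N·m" and "mol/L" match while "foo" does not; the spec's actual pattern (see the JSON schema above) covers the full prefix and unit tables.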

values pydantic-field ¤

values: TVs

A fixed set of nominal values or an ascending sequence of ordinal values. In this case data.type is required to be an unsigned integer type, e.g. 'uint8'. String values are interpreted as labels for tensor values 0, ..., N. Note: as YAML 1.2 does not natively support a "set" datatype, nominal values should be given as a sequence (aka list/array) as well.
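
For example, a hypothetical nominal data description in YAML, where the three string labels stand for tensor values 0, 1 and 2:

```yaml
data:
  values: [background, cell, membrane]  # labels for tensor values 0, 1, 2
  type: uint8
```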

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

OnnxWeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "opset_version": {
      "description": "ONNX opset version",
      "minimum": 7,
      "title": "Opset Version",
      "type": "integer"
    },
    "external_data": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr",
          "examples": [
            {
              "source": "weights.onnx.data"
            }
          ]
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
    }
  },
  "required": [
    "source",
    "opset_version"
  ],
  "title": "model.v0_5.OnnxWeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate
  • _validate_external_data_unique_file_name

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

external_data pydantic-field ¤

external_data: Optional[FileDescr_external_data] = None

Source of the external ONNX data file holding the weights. (If present, source holds the ONNX architecture without weights.)

opset_version pydantic-field ¤

opset_version: Annotated[int, Ge(7)]

ONNX opset version

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.
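
For example, a sketch of a weights section where the torchscript entry was converted from the state dict (file names hypothetical):

```yaml
weights:
  pytorch_state_dict:            # initial weights from training: no parent
    source: weights.pt
  torchscript:
    source: weights.torchscript.pt
    parent: pytorch_state_dict   # converted from the pytorch_state_dict entry
```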

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Source of the weights file.

suffix property ¤

suffix: str

type class-attribute ¤

type: WeightsFormat = 'onnx'

weights_format_name class-attribute ¤

weights_format_name: str = 'ONNX'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
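
The expected sha256 value is the 64-character hex digest of the source file's contents. A minimal stdlib sketch of computing such a digest (helper name and file path hypothetical):

```python
import hashlib

def sha256_hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# e.g. compare against a weights entry's expected hash:
# assert sha256_hex("weights.onnx") == weights_descr.sha256
```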

OrcidId ¤

Bases: ValidatedString



An ORCID identifier, see https://orcid.org/

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[str, AfterValidator(_validate_orcid_id)]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
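
The check behind _validate_orcid_id can be sketched as follows: a simplified re-implementation of the ISO 7064 mod 11-2 checksum rule that ORCID iDs use, not the library's actual code. The sample iD is the one given as an example in the Author schema above:

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Check the final ISO 7064 mod 11-2 check digit of a hyphenated ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected

print(orcid_checksum_ok("0000-0001-2345-6789"))  # → True
```

Note the check digit may be the letter "X" (representing 10); the library additionally validates the hyphenation pattern.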

OutputTensorDescr pydantic-model ¤

Bases: TensorDescrBase[OutputAxis]

Show JSON schema:
{
  "$defs": {
    "BatchAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "batch",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "batch",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "const": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
          "title": "Size"
        }
      },
      "required": [
        "type"
      ],
      "title": "model.v0_5.BatchAxis",
      "type": "object"
    },
    "BinarizeAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold values along `axis`",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Threshold",
          "type": "array"
        },
        "axis": {
          "description": "The `threshold` axis",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "threshold",
        "axis"
      ],
      "title": "model.v0_5.BinarizeAlongAxisKwargs",
      "type": "object"
    },
    "BinarizeDescr": {
      "additionalProperties": false,
      "description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: binarize\n        kwargs:\n          axis: 'channel'\n          threshold: [0.25, 0.5, 0.75]\n    ```\n- in Python:\n\n    >>> postprocessing = [BinarizeDescr(\n    ...   kwargs=BinarizeAlongAxisKwargs(\n    ...       axis=AxisId('channel'),\n    ...       threshold=[0.25, 0.5, 0.75],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "binarize",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/BinarizeKwargs"
            },
            {
              "$ref": "#/$defs/BinarizeAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.BinarizeDescr",
      "type": "object"
    },
    "BinarizeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [BinarizeDescr][]",
      "properties": {
        "threshold": {
          "description": "The fixed threshold",
          "title": "Threshold",
          "type": "number"
        }
      },
      "required": [
        "threshold"
      ],
      "title": "model.v0_5.BinarizeKwargs",
      "type": "object"
    },
    "ChannelAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "channel",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "channel",
          "title": "Type",
          "type": "string"
        },
        "channel_names": {
          "items": {
            "minLength": 1,
            "title": "Identifier",
            "type": "string"
          },
          "minItems": 1,
          "title": "Channel Names",
          "type": "array"
        }
      },
      "required": [
        "type",
        "channel_names"
      ],
      "title": "model.v0_5.ChannelAxis",
      "type": "object"
    },
    "ClipDescr": {
      "additionalProperties": false,
      "description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
      "properties": {
        "id": {
          "const": "clip",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ClipKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ClipDescr",
      "type": "object"
    },
    "ClipKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ClipDescr][]",
      "properties": {
        "min": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
          "title": "Min"
        },
        "min_percentile": {
          "anyOf": [
            {
              "exclusiveMaximum": 100,
              "minimum": 0,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
          "title": "Min Percentile"
        },
        "max": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
          "title": "Max"
        },
        "max_percentile": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "maximum": 100,
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
          "title": "Max Percentile"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        }
      },
      "title": "model.v0_5.ClipKwargs",
      "type": "object"
    },
    "DataDependentSize": {
      "additionalProperties": false,
      "properties": {
        "min": {
          "default": 1,
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "max": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Max"
        }
      },
      "title": "model.v0_5.DataDependentSize",
      "type": "object"
    },
    "EnsureDtypeDescr": {
      "additionalProperties": false,
      "description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n    The described bioimage.io model (incl. preprocessing) accepts any\n    float32-compatible tensor, normalizes it with percentiles and clipping and then\n    casts it to uint8, which is what the neural network in this example expects.\n    - in YAML\n        ```yaml\n        inputs:\n        - data:\n            type: float32  # described bioimage.io model is compatible with any float32 input tensor\n          preprocessing:\n          - id: scale_range\n              kwargs:\n              axes: ['y', 'x']\n              max_percentile: 99.8\n              min_percentile: 5.0\n          - id: clip\n              kwargs:\n              min: 0.0\n              max: 1.0\n          - id: ensure_dtype  # the neural network of the model requires uint8\n              kwargs:\n              dtype: uint8\n        ```\n    - in Python:\n        >>> preprocessing = [\n        ...     ScaleRangeDescr(\n        ...         kwargs=ScaleRangeKwargs(\n        ...           axes= (AxisId('y'), AxisId('x')),\n        ...           max_percentile= 99.8,\n        ...           min_percentile= 5.0,\n        ...         )\n        ...     ),\n        ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n        ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n        ... ]",
      "properties": {
        "id": {
          "const": "ensure_dtype",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/EnsureDtypeKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.EnsureDtypeDescr",
      "type": "object"
    },
    "EnsureDtypeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [EnsureDtypeDescr][]",
      "properties": {
        "dtype": {
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "title": "Dtype",
          "type": "string"
        }
      },
      "required": [
        "dtype"
      ],
      "title": "model.v0_5.EnsureDtypeKwargs",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value(s) to normalize with.",
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "title": "Mean",
          "type": "array"
        },
        "std": {
          "description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
          "items": {
            "minimum": 1e-06,
            "type": "number"
          },
          "minItems": 1,
          "title": "Std",
          "type": "array"
        },
        "axis": {
          "description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
          "examples": [
            "channel",
            "index"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "required": [
        "mean",
        "std",
        "axis"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`.\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          mean: 103.5\n          std: 13.7\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n    ... )]\n\n2. independently along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: fixed_zero_mean_unit_variance\n        kwargs:\n          axis: channel\n          mean: [101.5, 102.5, 103.5]\n          std: [11.7, 12.7, 13.7]\n    ```\n    - in Python\n    >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n    ...   kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n    ...     axis=AxisId(\"channel\"),\n    ...     mean=[101.5, 102.5, 103.5],\n    ...     std=[11.7, 12.7, 13.7],\n    ...   )\n    ... )]",
      "properties": {
        "id": {
          "const": "fixed_zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
            },
            {
              "$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "FixedZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [FixedZeroMeanUnitVarianceDescr][]",
      "properties": {
        "mean": {
          "description": "The mean value to normalize with.",
          "title": "Mean",
          "type": "number"
        },
        "std": {
          "description": "The standard deviation value to normalize with.",
          "minimum": 1e-06,
          "title": "Std",
          "type": "number"
        }
      },
      "required": [
        "mean",
        "std"
      ],
      "title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
      "type": "object"
    },
    "IndexOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            },
            {
              "$ref": "#/$defs/DataDependentSize"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        }
      },
      "required": [
        "type",
        "size"
      ],
      "title": "model.v0_5.IndexOutputAxis",
      "type": "object"
    },
    "IntervalOrRatioDataDescr": {
      "additionalProperties": false,
      "properties": {
        "type": {
          "default": "float32",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64"
          ],
          "examples": [
            "float32",
            "float64",
            "uint8",
            "uint16"
          ],
          "title": "Type",
          "type": "string"
        },
        "range": {
          "default": [
            null,
            null
          ],
          "description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            },
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            }
          ],
          "title": "Range",
          "type": "array"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            }
          ],
          "default": "arbitrary unit",
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "description": "Scale for data on an interval (or ratio) scale.",
          "title": "Scale",
          "type": "number"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Offset for data on a ratio scale.",
          "title": "Offset"
        }
      },
      "title": "model.v0_5.IntervalOrRatioDataDescr",
      "type": "object"
    },
    "NominalOrOrdinalDataDescr": {
      "additionalProperties": false,
      "properties": {
        "values": {
          "anyOf": [
            {
              "items": {
                "type": "integer"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "boolean"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "string"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "description": "A fixed set of nominal or an ascending sequence of ordinal values.\nString `values` are interpreted as labels for tensor values 0, ..., N;\nin this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
          "title": "Values"
        },
        "type": {
          "default": "uint8",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "examples": [
            "float32",
            "uint8",
            "uint16",
            "int64",
            "bool"
          ],
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        }
      },
      "required": [
        "values"
      ],
      "title": "model.v0_5.NominalOrOrdinalDataDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "ScaleLinearAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "Keyword arguments for [ScaleLinearDescr][]",
      "properties": {
        "axis": {
          "description": "The axis of gain and offset values.",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "gain": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 0.0,
          "description": "additive term",
          "title": "Offset"
        }
      },
      "required": [
        "axis"
      ],
      "title": "model.v0_5.ScaleLinearAlongAxisKwargs",
      "type": "object"
    },
    "ScaleLinearDescr": {
      "additionalProperties": false,
      "description": "Fixed linear scaling.\n\nExamples:\n  1. Scale with scalar gain and offset\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          gain: 2.0\n          offset: 3.0\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n    ... ]\n\n  2. Independent scaling along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          axis: 'channel'\n          gain: [1.0, 2.0, 3.0]\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(\n    ...         kwargs=ScaleLinearAlongAxisKwargs(\n    ...             axis=AxisId(\"channel\"),\n    ...             gain=[1.0, 2.0, 3.0],\n    ...         )\n    ...     )\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_linear",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/ScaleLinearKwargs"
            },
            {
              "$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ScaleLinearDescr",
      "type": "object"
    },
    "ScaleLinearKwargs": {
      "additionalProperties": false,
      "description": "Keyword arguments for [ScaleLinearDescr][]",
      "properties": {
        "gain": {
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain",
          "type": "number"
        },
        "offset": {
          "default": 0.0,
          "description": "additive term",
          "title": "Offset",
          "type": "number"
        }
      },
      "title": "model.v0_5.ScaleLinearKwargs",
      "type": "object"
    },
    "ScaleMeanVarianceDescr": {
      "additionalProperties": false,
      "description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean`.",
      "properties": {
        "id": {
          "const": "scale_mean_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ScaleMeanVarianceKwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.ScaleMeanVarianceDescr",
      "type": "object"
    },
    "ScaleMeanVarianceKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [ScaleMeanVarianceDescr][]",
      "properties": {
        "reference_tensor": {
          "description": "ID of unprocessed input tensor to match.",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean`.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "required": [
        "reference_tensor"
      ],
      "title": "model.v0_5.ScaleMeanVarianceKwargs",
      "type": "object"
    },
    "ScaleRangeDescr": {
      "additionalProperties": false,
      "description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...     ScaleRangeDescr(\n    ...         kwargs=ScaleRangeKwargs(\n    ...           axes=(AxisId('y'), AxisId('x')),\n    ...           max_percentile=99.8,\n    ...           min_percentile=5.0,\n    ...         )\n    ...     )\n    ... ]\n\n2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n      - id: clip\n        kwargs:\n          min: 0.0\n          max: 1.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...   ScaleRangeDescr(\n    ...     kwargs=ScaleRangeKwargs(\n    ...       axes=(AxisId('y'), AxisId('x')),\n    ...       max_percentile=99.8,\n    ...       min_percentile=5.0,\n    ...     )\n    ...   ),\n    ...   ClipDescr(\n    ...     kwargs=ClipKwargs(\n    ...       min=0.0,\n    ...       max=1.0,\n    ...     )\n    ...   ),\n    ... ]",
      "properties": {
        "id": {
          "const": "scale_range",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ScaleRangeKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ScaleRangeDescr",
      "type": "object"
    },
    "ScaleRangeKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] interval.\nFor other percentiles the normalized values will partially be outside the [0, 1]\ninterval. Use `ScaleRangeDescr` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "min_percentile": {
          "default": 0.0,
          "description": "The lower percentile used to determine the value to align with zero.",
          "exclusiveMaximum": 100,
          "minimum": 0,
          "title": "Min Percentile",
          "type": "number"
        },
        "max_percentile": {
          "default": 100.0,
          "description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
          "exclusiveMinimum": 1,
          "maximum": 100,
          "title": "Max Percentile",
          "type": "number"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        },
        "reference_tensor": {
          "anyOf": [
            {
              "maxLength": 32,
              "minLength": 1,
              "title": "TensorId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "ID of the unprocessed input tensor to compute the percentiles from.\nDefault: The tensor itself.",
          "title": "Reference Tensor"
        }
      },
      "title": "model.v0_5.ScaleRangeKwargs",
      "type": "object"
    },
    "SigmoidDescr": {
      "additionalProperties": false,
      "description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: sigmoid\n    ```\n- in Python:\n\n    >>> postprocessing = [SigmoidDescr()]",
      "properties": {
        "id": {
          "const": "sigmoid",
          "title": "Id",
          "type": "string"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SigmoidDescr",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    },
    "SoftmaxDescr": {
      "additionalProperties": false,
      "description": "The softmax function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: softmax\n        kwargs:\n          axis: channel\n    ```\n- in Python:\n\n    >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
      "properties": {
        "id": {
          "const": "softmax",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/SoftmaxKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.SoftmaxDescr",
      "type": "object"
    },
    "SoftmaxKwargs": {
      "additionalProperties": false,
      "description": "keyword arguments for [SoftmaxDescr][]",
      "properties": {
        "axis": {
          "default": "channel",
          "description": "The axis to apply the softmax function along.\nNote:\n    Defaults to 'channel' axis\n    (which may not exist, in which case\n    a different axis id has to be specified).",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "title": "model.v0_5.SoftmaxKwargs",
      "type": "object"
    },
    "SpaceOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxis",
      "type": "object"
    },
    "SpaceOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxisWithHalo",
      "type": "object"
    },
    "StardistPostprocessingDescr": {
      "additionalProperties": false,
      "description": "Stardist postprocessing including non-maximum suppression and converting polygon representations to instance labels\n\nas described in:\n- Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers.\n[*Cell Detection with Star-convex Polygons*](https://arxiv.org/abs/1806.03535).\nInternational Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018.\n- Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, and Gene Myers.\n[*Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy*](http://openaccess.thecvf.com/content_WACV_2020/papers/Weigert_Star-convex_Polyhedra_for_3D_Object_Detection_and_Segmentation_in_Microscopy_WACV_2020_paper.pdf).\nThe IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, March 2020.\n\nNote: Only available if the `stardist` package is installed.",
      "properties": {
        "id": {
          "const": "stardist_postprocessing",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "anyOf": [
            {
              "$ref": "#/$defs/StardistPostprocessingKwargs2D"
            },
            {
              "$ref": "#/$defs/StardistPostprocessingKwargs3D"
            }
          ],
          "title": "Kwargs"
        }
      },
      "required": [
        "id",
        "kwargs"
      ],
      "title": "model.v0_5.StardistPostprocessingDescr",
      "type": "object"
    },
    "StardistPostprocessingKwargs2D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs2D",
      "type": "object"
    },
    "StardistPostprocessingKwargs3D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 3,
              "minItems": 3,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        },
        "n_rays": {
          "description": "Number of rays for 3D star-convex polyhedra.",
          "title": "N Rays",
          "type": "integer"
        },
        "anisotropy": {
          "description": "Anisotropy factors for 3D star-convex polyhedra, i.e. the physical pixel size along each spatial axis.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "number"
            },
            {
              "type": "number"
            },
            {
              "type": "number"
            }
          ],
          "title": "Anisotropy",
          "type": "array"
        },
        "overlap_label": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional label to apply to any area of overlapping predicted objects.",
          "title": "Overlap Label"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b",
        "n_rays",
        "anisotropy"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs3D",
      "type": "object"
    },
    "TimeOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxis",
      "type": "object"
    },
    "TimeOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxisWithHalo",
      "type": "object"
    },
    "ZeroMeanUnitVarianceDescr": {
      "additionalProperties": false,
      "description": "Subtract mean and divide by variance.\n\nExamples:\n    Subtract tensor mean and variance\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: zero_mean_unit_variance\n    ```\n    - in Python\n    >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
      "properties": {
        "id": {
          "const": "zero_mean_unit_variance",
          "title": "Id",
          "type": "string"
        },
        "kwargs": {
          "$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
        }
      },
      "required": [
        "id"
      ],
      "title": "model.v0_5.ZeroMeanUnitVarianceDescr",
      "type": "object"
    },
    "ZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "output",
      "description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
      "maxLength": 32,
      "minLength": 1,
      "title": "TensorId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "free text description",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "axes": {
      "description": "tensor axes",
      "items": {
        "discriminator": {
          "mapping": {
            "batch": "#/$defs/BatchAxis",
            "channel": "#/$defs/ChannelAxis",
            "index": "#/$defs/IndexOutputAxis",
            "space": {
              "oneOf": [
                {
                  "$ref": "#/$defs/SpaceOutputAxis"
                },
                {
                  "$ref": "#/$defs/SpaceOutputAxisWithHalo"
                }
              ]
            },
            "time": {
              "oneOf": [
                {
                  "$ref": "#/$defs/TimeOutputAxis"
                },
                {
                  "$ref": "#/$defs/TimeOutputAxisWithHalo"
                }
              ]
            }
          },
          "propertyName": "type"
        },
        "oneOf": [
          {
            "$ref": "#/$defs/BatchAxis"
          },
          {
            "$ref": "#/$defs/ChannelAxis"
          },
          {
            "$ref": "#/$defs/IndexOutputAxis"
          },
          {
            "oneOf": [
              {
                "$ref": "#/$defs/TimeOutputAxis"
              },
              {
                "$ref": "#/$defs/TimeOutputAxisWithHalo"
              }
            ]
          },
          {
            "oneOf": [
              {
                "$ref": "#/$defs/SpaceOutputAxis"
              },
              {
                "$ref": "#/$defs/SpaceOutputAxisWithHalo"
              }
            ]
          }
        ]
      },
      "minItems": 1,
      "title": "Axes",
      "type": "array"
    },
    "test_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
    },
    "sample_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
    },
    "data": {
      "anyOf": [
        {
          "$ref": "#/$defs/NominalOrOrdinalDataDescr"
        },
        {
          "$ref": "#/$defs/IntervalOrRatioDataDescr"
        },
        {
          "items": {
            "anyOf": [
              {
                "$ref": "#/$defs/NominalOrOrdinalDataDescr"
              },
              {
                "$ref": "#/$defs/IntervalOrRatioDataDescr"
              }
            ]
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "default": {
        "type": "float32",
        "range": [
          null,
          null
        ],
        "unit": "arbitrary unit",
        "scale": 1.0,
        "offset": null
      },
      "description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
      "title": "Data"
    },
    "postprocessing": {
      "description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n      If not given this is added to cast to this tensor's `data.type`.",
      "items": {
        "discriminator": {
          "mapping": {
            "binarize": "#/$defs/BinarizeDescr",
            "clip": "#/$defs/ClipDescr",
            "ensure_dtype": "#/$defs/EnsureDtypeDescr",
            "fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
            "scale_linear": "#/$defs/ScaleLinearDescr",
            "scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
            "scale_range": "#/$defs/ScaleRangeDescr",
            "sigmoid": "#/$defs/SigmoidDescr",
            "softmax": "#/$defs/SoftmaxDescr",
            "stardist_postprocessing": "#/$defs/StardistPostprocessingDescr",
            "zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
          },
          "propertyName": "id"
        },
        "oneOf": [
          {
            "$ref": "#/$defs/BinarizeDescr"
          },
          {
            "$ref": "#/$defs/ClipDescr"
          },
          {
            "$ref": "#/$defs/EnsureDtypeDescr"
          },
          {
            "$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
          },
          {
            "$ref": "#/$defs/ScaleLinearDescr"
          },
          {
            "$ref": "#/$defs/ScaleMeanVarianceDescr"
          },
          {
            "$ref": "#/$defs/ScaleRangeDescr"
          },
          {
            "$ref": "#/$defs/SigmoidDescr"
          },
          {
            "$ref": "#/$defs/SoftmaxDescr"
          },
          {
            "$ref": "#/$defs/StardistPostprocessingDescr"
          },
          {
            "$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
          }
        ]
      },
      "title": "Postprocessing",
      "type": "array"
    }
  },
  "required": [
    "axes"
  ],
  "title": "model.v0_5.OutputTensorDescr",
  "type": "object"
}
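The `zero_mean_unit_variance` postprocessing in the schema above documents its formula as `out = (tensor - mean) / (std + eps)`. A minimal NumPy sketch of that reduction (assuming a hypothetical `('channel', 'y', 'x')` tensor normalized per channel, i.e. `axes=('y', 'x')`):

```python
import numpy as np

# Sketch of zero_mean_unit_variance with axes=('y', 'x') on a
# ('channel', 'y', 'x') tensor: mean/std are reduced over y and x,
# so each channel is normalized independently.
eps = 1e-6  # default from ZeroMeanUnitVarianceKwargs
tensor = np.random.rand(2, 8, 8).astype("float32")
mean = tensor.mean(axis=(1, 2), keepdims=True)
std = tensor.std(axis=(1, 2), keepdims=True)
out = (tensor - mean) / (std + eps)
```

Leaving the 'channel' axis out of `axes` is what makes the normalization per-channel; including 'batch' would normalize across samples instead.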

Fields:

Validators:

  • _validate_axes (axes)
  • _validate_sample_tensor
  • _check_data_type_across_channels (data)
  • _check_data_matches_channelaxis
  • _validate_postprocessing_kwargs

axes pydantic-field ¤

axes: NotEmpty[Sequence[IO_AxisT]]

tensor axes
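As a sketch, the discriminated union above accepts one entry per axis, discriminated by "type" (raw data form; the `channel_names`, `id` and `size` fields follow the schema definitions referenced above):

```python
# Hedged sketch: raw axes data matching the discriminated union in the
# schema above (one entry per axis, discriminated by "type").
axes = [
    {"type": "batch"},
    {"type": "channel", "channel_names": ["probability"]},
    {"type": "space", "id": "y", "size": 128},
    {"type": "space", "id": "x", "size": 128},
]
assert all("type" in a for a in axes)
```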

data pydantic-field ¤

data: Union[
    TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]

Description of the tensor's data values, optionally per channel. If specified per channel, the data type needs to match across channels.
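A sketch of the per-channel form, illustrating the constraint that the data `type` must match across channels (field names follow the schema above; values are made up):

```python
# Hedged sketch: per-channel data description; the `type` entries must
# agree across channels, as required above.
data = [
    {"type": "float32", "range": [0.0, 1.0]},
    {"type": "float32", "range": [None, None]},
]
assert len({d["type"] for d in data}) == 1
```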

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

free text description

dtype property ¤

dtype: Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
    "bool",
]

dtype as specified under data.type or data[i].type

id pydantic-field ¤

Output tensor id. No duplicates are allowed across all inputs and outputs.

postprocessing pydantic-field ¤

postprocessing: List[PostprocessingDescr]

Description of how this output should be postprocessed.

postprocessing always ends with an 'ensure_dtype' operation.

If not given, one is appended automatically to cast to this tensor's data.type.
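A sketch of an explicit chain honoring that rule (raw data form; the operation ids come from the discriminator mapping in the schema above):

```python
# Hedged sketch: a postprocessing chain that ends, as required, with an
# explicit 'ensure_dtype' cast to the tensor's data type.
postprocessing = [
    {"id": "zero_mean_unit_variance"},
    {"id": "ensure_dtype", "kwargs": {"dtype": "float32"}},
]
assert postprocessing[-1]["id"] == "ensure_dtype"
```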

sample_tensor pydantic-field ¤

sample_tensor: FAIR[Optional[FileDescr_]] = None

A sample tensor to illustrate a possible input/output for the model. The sample image primarily serves to inform a human user about an example use case and is typically stored as .hdf5, .png or .tiff. It has to be readable by the imageio library (numpy's .npy format is not supported). The image dimensionality has to match the number of axes specified in this tensor description.

shape property ¤

shape

test_tensor pydantic-field ¤

test_tensor: FAIR[Optional[FileDescr_]] = None

An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
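A minimal sketch of producing such a test tensor (shape and dtype are made up; they would have to match this tensor description's axes and data type):

```python
import os
import tempfile

import numpy as np

# Hedged sketch: a test tensor is a single ndarray saved in numpy's
# .npy format; the file extension must be '.npy'.
path = os.path.join(tempfile.mkdtemp(), "test_output.npy")
np.save(path, np.zeros((1, 1, 64, 64), dtype="float32"))
loaded = np.load(path)
assert loaded.dtype == np.float32
```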

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get_axis_sizes_for_array ¤

get_axis_sizes_for_array(
    array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
def get_axis_sizes_for_array(self, array: NDArray[Any]) -> Dict[AxisId, int]:
    if len(array.shape) != len(self.axes):
        raise ValueError(
            f"Dimension mismatch: array shape {array.shape} (#{len(array.shape)})"
            + f" incompatible with {len(self.axes)} axes."
        )
    return {a.id: array.shape[i] for i, a in enumerate(self.axes)}
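The mapping logic above can be sketched standalone (hypothetical axis ids and shape): pair axis ids with array dimensions by position after checking that the ranks agree.

```python
import numpy as np

# Sketch of get_axis_sizes_for_array's logic with hypothetical axis ids.
axis_ids = ["batch", "channel", "y", "x"]
array = np.zeros((1, 3, 32, 48))
if array.ndim != len(axis_ids):
    raise ValueError(
        f"Dimension mismatch: array shape {array.shape}"
        f" incompatible with {len(axis_ids)} axes."
    )
sizes = dict(zip(axis_ids, array.shape))
```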

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ParameterizedSize pydantic-model ¤

Bases: Node

Describes a range of valid tensor axis sizes as size = min + n*step.

  • min and step are given by the model description.
  • All blocksize parameters n = 0,1,2,... yield a valid size.
  • A greater blocksize parameter n results in a greater size. This allows adjusting the axis size more generically.
Show JSON schema:
{
  "additionalProperties": false,
  "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
  "properties": {
    "min": {
      "exclusiveMinimum": 0,
      "title": "Min",
      "type": "integer"
    },
    "step": {
      "exclusiveMinimum": 0,
      "title": "Step",
      "type": "integer"
    }
  },
  "required": [
    "min",
    "step"
  ],
  "title": "model.v0_5.ParameterizedSize",
  "type": "object"
}

Fields:

  • min (Annotated[int, Gt(0)])
  • step (Annotated[int, Gt(0)])

N class-attribute ¤

N: Type[int] = ParameterizedSize_N

Positive integer to parameterize this axis

min pydantic-field ¤

min: Annotated[int, Gt(0)]

step pydantic-field ¤

step: Annotated[int, Gt(0)]

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get_n ¤

get_n(s: int) -> ParameterizedSize_N

Return the smallest n parameterizing a size greater than or equal to s.

Source code in src/bioimageio/spec/model/v0_5.py
def get_n(self, s: int) -> ParameterizedSize_N:
    """return smallest n parameterizing a size greater or equal than `s`"""
    return ceil((s - self.min) / self.step)

get_size ¤

get_size(n: ParameterizedSize_N) -> int
Source code in src/bioimageio/spec/model/v0_5.py
def get_size(self, n: ParameterizedSize_N) -> int:
    return self.min + self.step * n

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_size ¤

validate_size(size: int, msg_prefix: str = '') -> int
Source code in src/bioimageio/spec/model/v0_5.py
def validate_size(self, size: int, msg_prefix: str = "") -> int:
    if size < self.min:
        raise ValueError(
            f"{msg_prefix}size {size} < {self.min} (minimum axis size)"
        )
    if (size - self.min) % self.step != 0:
        raise ValueError(
            f"{msg_prefix}size {size} is not parameterized by `min + n*step` ="
            + f" `{self.min} + n*{self.step}`"
        )

    return size
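Together, the three methods implement `size = min + n*step`. A standalone sketch of the arithmetic (min and step values are made up):

```python
from math import ceil

# Standalone sketch of ParameterizedSize semantics with min=16, step=8.
min_, step = 16, 8

def get_size(n):
    # size for blocksize parameter n
    return min_ + step * n

def get_n(s):
    # smallest n parameterizing a size >= s
    return ceil((s - min_) / step)

def validate_size(size):
    # mirrors validate_size above: must be reachable as min + n*step
    if size < min_ or (size - min_) % step != 0:
        raise ValueError(f"size {size} is not parameterized by {min_} + n*{step}")
    return size

assert validate_size(get_size(get_n(33))) == 40
```

Rounding up in `get_n` guarantees the returned parameter yields a size at least as large as the requested one.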

PytorchStateDictWeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "ArchitectureFromFileDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Architecture source file",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        }
      },
      "required": [
        "source",
        "callable"
      ],
      "title": "model.v0_5.ArchitectureFromFileDescr",
      "type": "object"
    },
    "ArchitectureFromLibraryDescr": {
      "additionalProperties": false,
      "properties": {
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        },
        "import_from": {
          "description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
          "title": "Import From",
          "type": "string"
        }
      },
      "required": [
        "callable",
        "import_from"
      ],
      "title": "model.v0_5.ArchitectureFromLibraryDescr",
      "type": "object"
    },
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "architecture": {
      "anyOf": [
        {
          "$ref": "#/$defs/ArchitectureFromFileDescr"
        },
        {
          "$ref": "#/$defs/ArchitectureFromLibraryDescr"
        }
      ],
      "title": "Architecture"
    },
    "pytorch_version": {
      "$ref": "#/$defs/Version",
      "description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
    },
    "dependencies": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr",
          "examples": [
            {
              "source": "environment.yaml"
            }
          ]
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
    }
  },
  "required": [
    "source",
    "architecture",
    "pytorch_version"
  ],
  "title": "model.v0_5.PytorchStateDictWeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

architecture pydantic-field ¤

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) that trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

dependencies pydantic-field ¤

dependencies: Optional[FileDescr_dependencies] = None

Custom dependencies beyond pytorch, described in a conda environment file. Allows specifying custom dependencies; see the conda docs: - Exporting an environment file across platforms - Creating an environment file manually

The conda environment file should include pytorch, and any version pinning has to be compatible with pytorch_version.
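
A minimal sketch of such a conda environment file (the environment name, channel, and package pins are illustrative assumptions, not prescribed by the spec):

```yaml
# environment.yaml -- illustrative only
name: my-model-env
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pytorch=2.1  # must be compatible with pytorch_version
  - numpy
```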

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.
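
As a sketch, a weights section with an initial pytorch_state_dict entry and a torchscript child converted from it might look like this (file names, callable, and versions are made up; field names follow the schema shown here):

```yaml
weights:
  pytorch_state_dict:            # initial weights from training: no parent
    source: weights.pt
    architecture:
      source: model.py
      callable: MyNetworkClass
    pytorch_version: "2.1"
  torchscript:                   # converted weights: parent is required
    source: weights_torchscript.pt
    parent: pytorch_state_dict
    pytorch_version: "2.1"
```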

pytorch_version pydantic-field ¤

pytorch_version: Version

Version of the PyTorch library used. If dependencies is specified, it has to include pytorch and any version pinning has to be compatible.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Source of the weights file.

suffix property ¤

suffix: str

type class-attribute ¤

type: WeightsFormat = 'pytorch_state_dict'

weights_format_name class-attribute ¤

weights_format_name: str = 'Pytorch State Dict'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
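
The sha256 values compared above are 64-character hex digests of the file contents; they can be computed with the standard library alone (the helper name below is ours, not part of this package's API):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 hex digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()  # 64 lowercase hex characters
```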

RelativeFilePath ¤

Bases: RelativePathBase[Union[AbsoluteFilePath, HttpUrl, ZipPath]]


Inheritance: bioimageio.spec._internal.io.RelativePathBase → bioimageio.spec.model.v0_5.RelativeFilePath

A path relative to the rdf.yaml file (also if the RDF source is a URL).

Methods:

Name Description
__repr__
__str__
absolute

get the absolute path/url

format
get_absolute
model_post_init

add validation @private

Attributes:

Name Type Description
path PurePath
suffix

path property ¤

path: PurePath

suffix property ¤

suffix

__repr__ ¤

__repr__() -> str
Source code in src/bioimageio/spec/_internal/io.py
def __repr__(self) -> str:
    return f"RelativePath('{self}')"

__str__ ¤

__str__() -> str
Source code in src/bioimageio/spec/_internal/io.py
def __str__(self) -> str:
    return self.root.as_posix()

absolute ¤

absolute() -> AbsolutePathT

get the absolute path/url

(resolved at time of initialization with the root of the ValidationContext)

Source code in src/bioimageio/spec/_internal/io.py
def absolute(  # method not property analog to `pathlib.Path.absolute()`
    self,
) -> AbsolutePathT:
    """get the absolute path/url

    (resolved at time of initialization with the root of the ValidationContext)
    """
    return self._absolute

format ¤

format() -> str
Source code in src/bioimageio/spec/_internal/io.py
@model_serializer()
def format(self) -> str:
    return str(self)

get_absolute ¤

get_absolute(
    root: "RootHttpUrl | Path | AnyUrl | ZipFile",
) -> "AbsoluteFilePath | HttpUrl | ZipPath"
Source code in src/bioimageio/spec/_internal/io.py
def get_absolute(
    self, root: "RootHttpUrl | Path | AnyUrl | ZipFile"
) -> "AbsoluteFilePath | HttpUrl | ZipPath":
    absolute = self._get_absolute_impl(root)
    if (
        isinstance(absolute, Path)
        and (context := get_validation_context()).perform_io_checks
        and str(self.root) not in context.known_files
        and not absolute.is_file()
    ):
        raise ValueError(f"{absolute} does not point to an existing file")

    return absolute

model_post_init ¤

model_post_init(__context: Any) -> None

add validation @private

Source code in src/bioimageio/spec/_internal/io.py
def model_post_init(self, __context: Any) -> None:
    """add validation @private"""
    if not self.root.parts:  # an empty path can only be a directory
        raise ValueError(f"{self.root} is not a valid file path.")

    super().model_post_init(__context)

ReproducibilityTolerance pydantic-model ¤

Bases: Node

Describes what small numerical differences -- if any -- may be tolerated in the generated output when executing in different environments.

A tensor element output is considered mismatched to the test_tensor if abs(output - test_tensor) > absolute_tolerance + relative_tolerance * abs(test_tensor). (Internally we call numpy.testing.assert_allclose.)

Motivation

For testing, we can request the respective deep learning frameworks to be as reproducible as possible by setting seeds and choosing deterministic algorithms, but differences in operating systems, available hardware, and installed drivers may still lead to numerical differences.
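
The mismatch criterion quoted above can be written out directly; a pure-Python sketch for flat sequences (in practice this is what numpy.testing.assert_allclose evaluates element-wise, and the helper names below are ours):

```python
def count_mismatched(output, test_tensor, absolute_tolerance=0.001, relative_tolerance=0.001):
    """Count elements with abs(out - ref) > atol + rtol * abs(ref)."""
    return sum(
        abs(o - t) > absolute_tolerance + relative_tolerance * abs(t)
        for o, t in zip(output, test_tensor)
    )

def within_tolerance(output, test_tensor, mismatched_elements_per_million=100, **tolerances):
    """Apply the per-million mismatch budget to a flat tensor."""
    mismatched = count_mismatched(output, test_tensor, **tolerances)
    return mismatched * 1_000_000 <= mismatched_elements_per_million * len(test_tensor)
```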

Show JSON schema:
{
  "additionalProperties": true,
  "description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n    For testing we can request the respective deep learning frameworks to be as\n    reproducible as possible by setting seeds and chosing deterministic algorithms,\n    but differences in operating systems, available hardware and installed drivers\n    may still lead to numerical differences.",
  "properties": {
    "relative_tolerance": {
      "default": 0.001,
      "description": "Maximum relative tolerance of reproduced test tensor.",
      "maximum": 0.01,
      "minimum": 0,
      "title": "Relative Tolerance",
      "type": "number"
    },
    "absolute_tolerance": {
      "default": 0.001,
      "description": "Maximum absolute tolerance of reproduced test tensor.",
      "minimum": 0,
      "title": "Absolute Tolerance",
      "type": "number"
    },
    "mismatched_elements_per_million": {
      "default": 100,
      "description": "Maximum number of mismatched elements/pixels per million to tolerate.",
      "maximum": 1000,
      "minimum": 0,
      "title": "Mismatched Elements Per Million",
      "type": "integer"
    },
    "output_ids": {
      "default": [],
      "description": "Limits the output tensor IDs these reproducibility details apply to.",
      "items": {
        "maxLength": 32,
        "minLength": 1,
        "title": "TensorId",
        "type": "string"
      },
      "title": "Output Ids",
      "type": "array"
    },
    "weights_formats": {
      "default": [],
      "description": "Limits the weights formats these details apply to.",
      "items": {
        "enum": [
          "keras_hdf5",
          "keras_v3",
          "onnx",
          "pytorch_state_dict",
          "tensorflow_js",
          "tensorflow_saved_model_bundle",
          "torchscript"
        ],
        "type": "string"
      },
      "title": "Weights Formats",
      "type": "array"
    }
  },
  "title": "model.v0_5.ReproducibilityTolerance",
  "type": "object"
}

Fields:

absolute_tolerance pydantic-field ¤

absolute_tolerance: AbsoluteTolerance = 0.001

Maximum absolute tolerance of reproduced test tensor.

mismatched_elements_per_million pydantic-field ¤

mismatched_elements_per_million: MismatchedElementsPerMillion = 100

Maximum number of mismatched elements/pixels per million to tolerate.

output_ids pydantic-field ¤

output_ids: Sequence[TensorId] = ()

Limits the output tensor IDs these reproducibility details apply to.

relative_tolerance pydantic-field ¤

relative_tolerance: RelativeTolerance = 0.001

Maximum relative tolerance of reproduced test tensor.

weights_formats pydantic-field ¤

weights_formats: Sequence[WeightsFormat] = ()

Limits the weights formats these details apply to.

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ResourceId ¤

Bases: ValidatedString


Inheritance: bioimageio.spec._internal.validated_string.ValidatedString → bioimageio.spec.model.v0_5.ResourceId

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        NotEmpty[str],
        RestrictCharacters(
            string.ascii_lowercase + string.digits + "_-/."
        ),
        annotated_types.Predicate(
            lambda s: (
                not (s.startswith("/") or s.endswith("/"))
            )
        ),
    ]
]

the pydantic root model to validate the string
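
The root_model above restricts IDs to lowercase letters, digits, and the characters _-/. with no leading or trailing slash; the same constraint as a plain predicate (an illustrative sketch, not the library's API):

```python
import string

_ALLOWED = set(string.ascii_lowercase + string.digits + "_-/.")

def is_valid_resource_id(s: str) -> bool:
    """Non-empty, allowed characters only, no leading or trailing '/'."""
    return bool(s) and set(s) <= _ALLOWED and not s.startswith("/") and not s.endswith("/")
```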

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

RunMode pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "name": {
      "anyOf": [
        {
          "const": "deepimagej",
          "type": "string"
        },
        {
          "type": "string"
        }
      ],
      "description": "Run mode name",
      "title": "Name"
    },
    "kwargs": {
      "additionalProperties": true,
      "description": "Run mode specific key word arguments",
      "title": "Kwargs",
      "type": "object"
    }
  },
  "required": [
    "name"
  ],
  "title": "model.v0_4.RunMode",
  "type": "object"
}

Fields:

kwargs pydantic-field ¤

kwargs: Dict[str, Any]

Run mode specific key word arguments

name pydantic-field ¤

name: Annotated[
    Union[KnownRunMode, str],
    warn(KnownRunMode, "Unknown run mode '{value}'."),
]

Run mode name

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleLinearAlongAxisKwargs pydantic-model ¤

Bases: KwargsNode

Key word arguments for ScaleLinearDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "Key word arguments for [ScaleLinearDescr][]",
  "properties": {
    "axis": {
      "description": "The axis of gain and offset values.",
      "examples": [
        "channel"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "gain": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "default": 1.0,
      "description": "multiplicative factor",
      "title": "Gain"
    },
    "offset": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "items": {
            "type": "number"
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "default": 0.0,
      "description": "additive term",
      "title": "Offset"
    }
  },
  "required": [
    "axis"
  ],
  "title": "model.v0_5.ScaleLinearAlongAxisKwargs",
  "type": "object"
}

Fields:

Validators:

  • _validate

axis pydantic-field ¤

axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]

The axis of gain and offset values.

gain pydantic-field ¤

gain: Union[float, NotEmpty[List[float]]] = 1.0

multiplicative factor

offset pydantic-field ¤

offset: Union[float, NotEmpty[List[float]]] = 0.0

additive term
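As a hedged sketch of how these kwargs are applied, per-axis linear scaling broadcasts `gain` and `offset` along the chosen axis; the helper name and NumPy usage below are illustrative, not part of the spec:

```python
import numpy as np

# Illustrative sketch (not spec code): apply scale_linear along one axis.
# A gain/offset list must have one entry per element of that axis.
def scale_linear_along_axis(tensor, axis, gain, offset=0.0):
    shape = [1] * tensor.ndim
    shape[axis] = -1  # broadcast gain/offset along the chosen axis only
    g = np.reshape(gain, shape)
    o = np.reshape(offset, shape) if np.ndim(offset) else offset
    return tensor * g + o

x = np.ones((3, 2, 2))  # axes: ('channel', 'y', 'x')
y = scale_linear_along_axis(x, axis=0, gain=[1.0, 2.0, 3.0])
```

Scalar `gain`/`offset` pass through unchanged, so the same sketch covers mixing a per-axis gain with a scalar offset.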

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleLinearDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Fixed linear scaling.

Examples:

  1. Scale with scalar gain and offset
    - in YAML
      preprocessing:
        - id: scale_linear
          kwargs:
            gain: 2.0
            offset: 3.0
    - in Python:
      preprocessing = [
          ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain=2.0, offset=3.0))
      ]
  2. Independent scaling along an axis
    - in YAML
      preprocessing:
        - id: scale_linear
          kwargs:
            axis: 'channel'
            gain: [1.0, 2.0, 3.0]
    - in Python:
      preprocessing = [
          ScaleLinearDescr(
              kwargs=ScaleLinearAlongAxisKwargs(
                  axis=AxisId("channel"),
                  gain=[1.0, 2.0, 3.0],
              )
          )
      ]

Show JSON schema:
{
  "$defs": {
    "ScaleLinearAlongAxisKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "axis": {
          "description": "The axis of gain and offset values.",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "gain": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "default": 0.0,
          "description": "additive term",
          "title": "Offset"
        }
      },
      "required": [
        "axis"
      ],
      "title": "model.v0_5.ScaleLinearAlongAxisKwargs",
      "type": "object"
    },
    "ScaleLinearKwargs": {
      "additionalProperties": false,
      "description": "Key word arguments for [ScaleLinearDescr][]",
      "properties": {
        "gain": {
          "default": 1.0,
          "description": "multiplicative factor",
          "title": "Gain",
          "type": "number"
        },
        "offset": {
          "default": 0.0,
          "description": "additive term",
          "title": "Offset",
          "type": "number"
        }
      },
      "title": "model.v0_5.ScaleLinearKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Fixed linear scaling.\n\nExamples:\n  1. Scale with scalar gain and offset\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          gain: 2.0\n          offset: 3.0\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n    ... ]\n\n  2. Independent scaling along an axis\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_linear\n        kwargs:\n          axis: 'channel'\n          gain: [1.0, 2.0, 3.0]\n    ```\n    - in Python:\n\n    >>> preprocessing = [\n    ...     ScaleLinearDescr(\n    ...         kwargs=ScaleLinearAlongAxisKwargs(\n    ...             axis=AxisId(\"channel\"),\n    ...             gain=[1.0, 2.0, 3.0],\n    ...         )\n    ...     )\n    ... ]",
  "properties": {
    "id": {
      "const": "scale_linear",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "anyOf": [
        {
          "$ref": "#/$defs/ScaleLinearKwargs"
        },
        {
          "$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
        }
      ],
      "title": "Kwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.ScaleLinearDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['scale_linear'] = 'scale_linear'

implemented_id class-attribute ¤

implemented_id: Literal['scale_linear'] = 'scale_linear'

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explicit_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explicit_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explicit_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleLinearKwargs pydantic-model ¤

Bases: KwargsNode

Key word arguments for ScaleLinearDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "Key word arguments for [ScaleLinearDescr][]",
  "properties": {
    "gain": {
      "default": 1.0,
      "description": "multiplicative factor",
      "title": "Gain",
      "type": "number"
    },
    "offset": {
      "default": 0.0,
      "description": "additive term",
      "title": "Offset",
      "type": "number"
    }
  },
  "title": "model.v0_5.ScaleLinearKwargs",
  "type": "object"
}

Fields:

Validators:

  • _validate

gain pydantic-field ¤

gain: float = 1.0

multiplicative factor

offset pydantic-field ¤

offset: float = 0.0

additive term
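With scalar kwargs the operation reduces to an elementwise affine map, `out = tensor * gain + offset`; a minimal illustrative sketch (the function name is assumed, not from the library):

```python
# Illustrative sketch of scalar scale_linear: out = value * gain + offset.
def scale_linear(values, gain=1.0, offset=0.0):
    return [v * gain + offset for v in values]

scale_linear([0.0, 1.0, 2.0], gain=2.0, offset=3.0)  # -> [3.0, 5.0, 7.0]
```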

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
459
460
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleMeanVarianceDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Scale a tensor's data distribution to match another tensor's mean/std. out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
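The formula can be sketched in NumPy with joint statistics over all axes (the helper name is illustrative; the spec's `axes` kwarg additionally controls which axes the mean/std are reduced over):

```python
import numpy as np

# Illustrative sketch: match a tensor's mean/std to a reference tensor's,
# following out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
def scale_mean_variance(tensor, reference, eps=1e-6):
    mean, std = tensor.mean(), tensor.std()
    ref_mean, ref_std = reference.mean(), reference.std()
    return (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean

t = np.array([0.0, 1.0, 2.0, 3.0])
ref = np.array([10.0, 20.0, 30.0, 40.0])
out = scale_mean_variance(t, ref)
```

After the transform, `out` has (up to `eps`) the reference tensor's mean and standard deviation.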

Show JSON schema:
{
  "$defs": {
    "ScaleMeanVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ScaleMeanVarianceKwargs][]",
      "properties": {
        "reference_tensor": {
          "description": "ID of unprocessed input tensor to match.",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability:\n`out  = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "required": [
        "reference_tensor"
      ],
      "title": "model.v0_5.ScaleMeanVarianceKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out  = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
  "properties": {
    "id": {
      "const": "scale_mean_variance",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/ScaleMeanVarianceKwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.ScaleMeanVarianceDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['scale_mean_variance'] = 'scale_mean_variance'

implemented_id class-attribute ¤

implemented_id: Literal["scale_mean_variance"] = (
    "scale_mean_variance"
)

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explicit_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explicit_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explicit_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleMeanVarianceKwargs pydantic-model ¤

Bases: KwargsNode

Key word arguments for ScaleMeanVarianceDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [ScaleMeanVarianceKwargs][]",
  "properties": {
    "reference_tensor": {
      "description": "ID of unprocessed input tensor to match.",
      "maxLength": 32,
      "minLength": 1,
      "title": "TensorId",
      "type": "string"
    },
    "axes": {
      "anyOf": [
        {
          "items": {
            "maxLength": 16,
            "minLength": 1,
            "title": "AxisId",
            "type": "string"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
      "examples": [
        [
          "batch",
          "x",
          "y"
        ]
      ],
      "title": "Axes"
    },
    "eps": {
      "default": 1e-06,
      "description": "Epsilon for numeric stability:\n`out  = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
      "exclusiveMinimum": 0,
      "maximum": 0.1,
      "title": "Eps",
      "type": "number"
    }
  },
  "required": [
    "reference_tensor"
  ],
  "title": "model.v0_5.ScaleMeanVarianceKwargs",
  "type": "object"
}

Fields:

axes pydantic-field ¤

axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None

The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std. For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x') resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y'). To normalize samples independently, leave out the 'batch' axis. Default: Scale all axes jointly.

eps pydantic-field ¤

eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06

Epsilon for numeric stability: out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.

reference_tensor pydantic-field ¤

reference_tensor: TensorId

ID of unprocessed input tensor to match.

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
453
454
455
456
457
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleRangeDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Scale with percentiles.

Examples:

  1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0

    • in YAML
      preprocessing:
        - id: scale_range
          kwargs:
            axes: ['y', 'x']
            max_percentile: 99.8
            min_percentile: 5.0
      
    • in Python
    >>> preprocessing = [
    ...     ScaleRangeDescr(
    ...         kwargs=ScaleRangeKwargs(
    ...           axes= (AxisId('y'), AxisId('x')),
    ...           max_percentile= 99.8,
    ...           min_percentile= 5.0,
    ...         )
    ...     )
    ... ]
    
  2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.

    • in YAML
      preprocessing:
        - id: scale_range
          kwargs:
            axes: ['y', 'x']
            max_percentile: 99.8
            min_percentile: 5.0
        - id: clip
          kwargs:
            min: 0.0
            max: 1.0
      
    • in Python
    >>> preprocessing = [
    ...   ScaleRangeDescr(
    ...     kwargs=ScaleRangeKwargs(
    ...       axes= (AxisId('y'), AxisId('x')),
    ...       max_percentile= 99.8,
    ...       min_percentile= 5.0,
    ...     )
    ...   ),
    ...   ClipDescr(
    ...     kwargs=ClipKwargs(
    ...       min=0.0,
    ...       max=1.0,
    ...     )
    ...   ),
    ... ]
    
Show JSON schema:
{
  "$defs": {
    "ScaleRangeKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "min_percentile": {
          "default": 0.0,
          "description": "The lower percentile used to determine the value to align with zero.",
          "exclusiveMaximum": 100,
          "minimum": 0,
          "title": "Min Percentile",
          "type": "number"
        },
        "max_percentile": {
          "default": 100.0,
          "description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
          "exclusiveMinimum": 1,
          "maximum": 100,
          "title": "Max Percentile",
          "type": "number"
        },
        "eps": {
          "default": 1e-06,
          "description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        },
        "reference_tensor": {
          "anyOf": [
            {
              "maxLength": 32,
              "minLength": 1,
              "title": "TensorId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "ID of the unprocessed input tensor to compute the percentiles from.\nDefault: The tensor itself.",
          "title": "Reference Tensor"
        }
      },
      "title": "model.v0_5.ScaleRangeKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...     ScaleRangeDescr(\n    ...         kwargs=ScaleRangeKwargs(\n    ...           axes= (AxisId('y'), AxisId('x')),\n    ...           max_percentile= 99.8,\n    ...           min_percentile= 5.0,\n    ...         )\n    ...     )\n    ... ]\n\n  2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: scale_range\n        kwargs:\n          axes: ['y', 'x']\n          max_percentile: 99.8\n          min_percentile: 5.0\n       - id: clip\n         kwargs:\n          min: 0.0\n          max: 1.0\n    ```\n    - in Python\n\n    >>> preprocessing = [\n    ...   ScaleRangeDescr(\n    ...     kwargs=ScaleRangeKwargs(\n    ...       axes= (AxisId('y'), AxisId('x')),\n    ...       max_percentile= 99.8,\n    ...       min_percentile= 5.0,\n    ...     )\n    ...   ),\n    ...   ClipDescr(\n    ...     kwargs=ClipKwargs(\n    ...       min=0.0,\n    ...       max=1.0,\n    ...     )\n    ...   ),\n    ... ]",
  "properties": {
    "id": {
      "const": "scale_range",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/ScaleRangeKwargs"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.ScaleRangeDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['scale_range'] = 'scale_range'

implemented_id class-attribute ¤

implemented_id: Literal['scale_range'] = 'scale_range'

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ScaleRangeKwargs pydantic-model ¤

Bases: KwargsNode

key word arguments for ScaleRangeDescr

For min_percentile=0.0 (the default) and max_percentile=100 (the default) this processing step normalizes data to the [0, 1] interval. For other percentiles the normalized values will partially lie outside the [0, 1] interval. Use ScaleRange followed by ClipDescr if you want to limit the normalized values to a range.
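The computation described here (percentile lookup followed by the `out = (tensor - v_lower) / (v_upper - v_lower + eps)` mapping) can be sketched with numpy; this is an illustrative re-implementation, not the library's own code:

```python
import numpy as np

def scale_range(tensor, axes=None, min_percentile=0.0, max_percentile=100.0, eps=1e-6):
    """Illustrative sketch of the scale_range step described above.

    Percentile values are reduced over `axes` (None = all axes jointly),
    then the tensor is mapped so that v_lower -> 0 and v_upper -> ~1.
    """
    v_lower = np.percentile(tensor, min_percentile, axis=axes, keepdims=True)
    v_upper = np.percentile(tensor, max_percentile, axis=axes, keepdims=True)
    return (tensor - v_lower) / (v_upper - v_lower + eps)

data = np.array([[0.0, 5.0], [10.0, 20.0]])
out = scale_range(data)  # with the defaults, min -> 0.0 and max -> ~1.0
```

With non-default percentiles some values fall outside [0, 1], which is why the spec suggests following this step with a clip.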

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
  "properties": {
    "axes": {
      "anyOf": [
        {
          "items": {
            "maxLength": 16,
            "minLength": 1,
            "title": "AxisId",
            "type": "string"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
      "examples": [
        [
          "batch",
          "x",
          "y"
        ]
      ],
      "title": "Axes"
    },
    "min_percentile": {
      "default": 0.0,
      "description": "The lower percentile used to determine the value to align with zero.",
      "exclusiveMaximum": 100,
      "minimum": 0,
      "title": "Min Percentile",
      "type": "number"
    },
    "max_percentile": {
      "default": 100.0,
      "description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
      "exclusiveMinimum": 1,
      "maximum": 100,
      "title": "Max Percentile",
      "type": "number"
    },
    "eps": {
      "default": 1e-06,
      "description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
      "exclusiveMinimum": 0,
      "maximum": 0.1,
      "title": "Eps",
      "type": "number"
    },
    "reference_tensor": {
      "anyOf": [
        {
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "ID of the unprocessed input tensor to compute the percentiles from.\nDefault: The tensor itself.",
      "title": "Reference Tensor"
    }
  },
  "title": "model.v0_5.ScaleRangeKwargs",
  "type": "object"
}

Fields:

Validators:

axes pydantic-field ¤

axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None

The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value. For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x') resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y'). To normalize samples independently, leave out the "batch" axis. Default: Scale all axes jointly.

eps pydantic-field ¤

eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06

Epsilon for numeric stability. out = (tensor - v_lower) / (v_upper - v_lower + eps); with v_lower,v_upper values at the respective percentiles.

max_percentile pydantic-field ¤

max_percentile: Annotated[float, Interval(gt=1, le=100)] = (
    100.0
)

The upper percentile used to determine the value to align with one. Has to be bigger than min_percentile. The range is 1 to 100 instead of 0 to 100 to avoid mistakenly accepting percentiles specified in the range 0.0 to 1.0.

min_percentile pydantic-field ¤

min_percentile: Annotated[float, Interval(ge=0, lt=100)] = (
    0.0
)

The lower percentile used to determine the value to align with zero.

reference_tensor pydantic-field ¤

reference_tensor: Optional[TensorId] = None

ID of the unprocessed input tensor to compute the percentiles from. Default: The tensor itself.

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

min_smaller_max pydantic-validator ¤

min_smaller_max(
    value: float, info: ValidationInfo
) -> float
Source code in src/bioimageio/spec/model/v0_5.py
@field_validator("max_percentile", mode="after")
@classmethod
def min_smaller_max(cls, value: float, info: ValidationInfo) -> float:
    if (min_p := info.data["min_percentile"]) >= value:
        raise ValueError(f"min_percentile {min_p} >= max_percentile {value}")

    return value

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Sha256 ¤

Bases: ValidatedString



A SHA-256 hash value

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        str,
        StringConstraints(
            strip_whitespace=True,
            to_lower=True,
            min_length=64,
            max_length=64,
        ),
    ]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()
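For illustration, any digest produced by Python's standard hashlib already satisfies the constraints of the root_model above (a SHA-256 hex digest is always a 64-character lowercase hex string); the input bytes here are arbitrary:

```python
import hashlib

# A SHA-256 hex digest is a 64-character lowercase hex string, which is
# exactly what the Sha256 root_model constrains (min_length=64,
# max_length=64, to_lower, strip_whitespace).
digest = hashlib.sha256(b"example").hexdigest()
```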

SiUnit ¤

Bases: ValidatedString



An SI unit

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[
        str,
        StringConstraints(
            min_length=1, pattern=SI_UNIT_REGEX
        ),
        BeforeValidator(_normalize_multiplication),
    ]
]

the pydantic root model to validate the string

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

SigmoidDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

The logistic sigmoid function, a.k.a. expit function.

Examples:

  • in YAML
    postprocessing:
      - id: sigmoid
    
  • in Python:

    >>> postprocessing = [SigmoidDescr()]
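The function applied by this step is the standard logistic sigmoid, 1 / (1 + exp(-x)), which maps any real value into the open interval (0, 1). A minimal numpy sketch:

```python
import numpy as np

def sigmoid(x):
    # logistic sigmoid (a.k.a. expit): 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

values = sigmoid(np.array([-2.0, 0.0, 2.0]))  # monotonic, sigmoid(0) == 0.5
```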
    
Show JSON schema:
{
  "additionalProperties": false,
  "description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: sigmoid\n    ```\n- in Python:\n\n    >>> postprocessing = [SigmoidDescr()]",
  "properties": {
    "id": {
      "const": "sigmoid",
      "title": "Id",
      "type": "string"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.SigmoidDescr",
  "type": "object"
}

Fields:

  • id (Literal['sigmoid'])

id pydantic-field ¤

id: Literal['sigmoid'] = 'sigmoid'

implemented_id class-attribute ¤

implemented_id: Literal['sigmoid'] = 'sigmoid'

kwargs property ¤

kwargs: KwargsNode

empty kwargs

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]], required): The object to validate.
  • strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
  • from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
  • context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SizeReference pydantic-model ¤

Bases: Node

A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.

axis.size = reference.size * reference.scale / axis.scale + offset

Note:

  1. The axis and the referenced axis need to have the same unit (or no unit).
  2. Batch axes may not be referenced.
  3. Fractions are rounded down.
  4. If the reference axis is concatenable the referencing axis is assumed to be concatenable as well with the same block order.

Example: An anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196 mm². Let's assume that we want to express the image height h in relation to its width w instead of only accepting input images of exactly 100*49 pixels (for example to express a range of valid image shapes by parametrizing w, see ParameterizedSize).

>>> w = SpaceInputAxis(id=AxisId("w"), size=100, unit="millimeter", scale=2)
>>> h = SpaceInputAxis(
...     id=AxisId("h"),
...     size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("w"), offset=-1),
...     unit="millimeter",
...     scale=4,
... )
>>> print(h.size.get_size(h, w))
49

⇒ h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49
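The arithmetic of this example can be checked directly. Note that where exactly the round-down from the note above is applied is an assumption in this sketch; with these values both readings give 49:

```python
import math

# axis.size = reference.size * reference.scale / axis.scale + offset,
# with fractions rounded down (placement of the floor is an assumption here)
ref_size, ref_scale, axis_scale, offset = 100, 2.0, 4.0, -1
size = math.floor(ref_size * ref_scale / axis_scale + offset)
```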

Show JSON schema:
{
  "additionalProperties": false,
  "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
  "properties": {
    "tensor_id": {
      "description": "tensor id of the reference axis",
      "maxLength": 32,
      "minLength": 1,
      "title": "TensorId",
      "type": "string"
    },
    "axis_id": {
      "description": "axis id of the reference axis",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "offset": {
      "default": 0,
      "title": "Offset",
      "type": "integer"
    }
  },
  "required": [
    "tensor_id",
    "axis_id"
  ],
  "title": "model.v0_5.SizeReference",
  "type": "object"
}

Fields:

axis_id pydantic-field ¤

axis_id: AxisId

axis id of the reference axis

offset pydantic-field ¤

offset: StrictInt = 0

tensor_id pydantic-field ¤

tensor_id: TensorId

tensor id of the reference axis

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get_size ¤

Compute the concrete size for a given axis and its reference axis.

Parameters:

- axis (Union[ChannelAxis, IndexInputAxis, IndexOutputAxis, TimeInputAxis, SpaceInputAxis, TimeOutputAxis, TimeOutputAxisWithHalo, SpaceOutputAxis, SpaceOutputAxisWithHalo]): The axis this SizeReference is the size of. Required.
- ref_axis (same union of axis types as axis): The reference axis to compute the size from. Required.
- n (ParameterizedSize_N): If the ref_axis is parameterized (of type ParameterizedSize) and no fixed ref_size is given, n is used to compute the size of the parameterized ref_axis. Default: 0.
- ref_size (Optional[int]): Overwrite the reference size instead of deriving it from ref_axis (ref_axis.scale is still used; any given n is ignored). Default: None.
Source code in src/bioimageio/spec/model/v0_5.py
def get_size(
    self,
    axis: Union[
        ChannelAxis,
        IndexInputAxis,
        IndexOutputAxis,
        TimeInputAxis,
        SpaceInputAxis,
        TimeOutputAxis,
        TimeOutputAxisWithHalo,
        SpaceOutputAxis,
        SpaceOutputAxisWithHalo,
    ],
    ref_axis: Union[
        ChannelAxis,
        IndexInputAxis,
        IndexOutputAxis,
        TimeInputAxis,
        SpaceInputAxis,
        TimeOutputAxis,
        TimeOutputAxisWithHalo,
        SpaceOutputAxis,
        SpaceOutputAxisWithHalo,
    ],
    n: ParameterizedSize_N = 0,
    ref_size: Optional[int] = None,
):
    """Compute the concrete size for a given axis and its reference axis.

    Args:
        axis: The axis this [SizeReference][] is the size of.
        ref_axis: The reference axis to compute the size from.
        n: If the **ref_axis** is parameterized (of type `ParameterizedSize`)
            and no fixed **ref_size** is given,
            **n** is used to compute the size of the parameterized **ref_axis**.
        ref_size: Overwrite the reference size instead of deriving it from
            **ref_axis**
            (**ref_axis.scale** is still used; any given **n** is ignored).
    """
    assert axis.size == self, (
        "Given `axis.size` is not defined by this `SizeReference`"
    )

    assert ref_axis.id == self.axis_id, (
        f"Expected `ref_axis.id` to be {self.axis_id}, but got {ref_axis.id}."
    )

    assert axis.unit == ref_axis.unit, (
        "`SizeReference` requires `axis` and `ref_axis` to have the same `unit`,"
        f" but {axis.unit}!={ref_axis.unit}"
    )
    if ref_size is None:
        if isinstance(ref_axis.size, (int, float)):
            ref_size = ref_axis.size
        elif isinstance(ref_axis.size, ParameterizedSize):
            ref_size = ref_axis.size.get_size(n)
        elif isinstance(ref_axis.size, DataDependentSize):
            raise ValueError(
                "Reference axis referenced in `SizeReference` may not be a `DataDependentSize`."
            )
        elif isinstance(ref_axis.size, SizeReference):
            raise ValueError(
                "Reference axis referenced in `SizeReference` may not be sized by a"
                + " `SizeReference` itself."
            )
        else:
            assert_never(ref_axis.size)

    return int(ref_size * ref_axis.scale / axis.scale + self.offset)
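To illustrate how n and ref_size interact, here is a plain-Python sketch of the logic in get_size for a reference axis with a ParameterizedSize (not the library API itself): the reference size defaults to min + n*step, and an explicit ref_size overrides it, in which case n is ignored.

```python
def resolve_size(ref_scale, scale, offset, min_, step, n=0, ref_size=None):
    # mirrors SizeReference.get_size for a ParameterizedSize reference axis:
    # ref_size defaults to min + n*step; an explicit ref_size wins and n is ignored
    if ref_size is None:
        ref_size = min_ + n * step
    return int(ref_size * ref_scale / scale + offset)

print(resolve_size(1.0, 1.0, 0, min_=32, step=16, n=2))                 # 64
print(resolve_size(1.0, 1.0, 0, min_=32, step=16, n=2, ref_size=100))   # 100
```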

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SoftmaxDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

The softmax function.

Examples:

  • in YAML
    postprocessing:
      - id: softmax
        kwargs:
          axis: channel
    
  • in Python:

    >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId("channel")))]
    
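For reference, the computation this postprocessing step describes is the standard softmax; a minimal, numerically stable version in plain Python (an illustrative sketch, not the library implementation):

```python
import math

def softmax(values):
    # subtract the maximum for numerical stability, then normalize the exponentials
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# probs are non-negative and sum to 1; larger inputs get larger probabilities
```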
Show JSON schema:
{
  "$defs": {
    "SoftmaxKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [SoftmaxDescr][]",
      "properties": {
        "axis": {
          "default": "channel",
          "description": "The axis to apply the softmax function along.\nNote:\n    Defaults to 'channel' axis\n    (which may not exist, in which case\n    a different axis id has to be specified).",
          "examples": [
            "channel"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        }
      },
      "title": "model.v0_5.SoftmaxKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "The softmax function.\n\nExamples:\n- in YAML\n    ```yaml\n    postprocessing:\n      - id: softmax\n        kwargs:\n          axis: channel\n    ```\n- in Python:\n\n    >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
  "properties": {
    "id": {
      "const": "softmax",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/SoftmaxKwargs"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.SoftmaxDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal['softmax'] = 'softmax'

implemented_id class-attribute ¤

implemented_id: Literal['softmax'] = 'softmax'

kwargs pydantic-field ¤

kwargs: SoftmaxKwargs

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SoftmaxKwargs pydantic-model ¤

Bases: KwargsNode

Keyword arguments for SoftmaxDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [SoftmaxDescr][]",
  "properties": {
    "axis": {
      "default": "channel",
      "description": "The axis to apply the softmax function along.\nNote:\n    Defaults to 'channel' axis\n    (which may not exist, in which case\n    a different axis id has to be specified).",
      "examples": [
        "channel"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    }
  },
  "title": "model.v0_5.SoftmaxKwargs",
  "type": "object"
}

Fields:

axis pydantic-field ¤

axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]

The axis to apply the softmax function along. Note: Defaults to 'channel' axis (which may not exist, in which case a different axis id has to be specified).
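If the tensor has no 'channel' axis, a different axis id has to be given explicitly; for example (the axis id 'index' is a placeholder):

```yaml
postprocessing:
  - id: softmax
    kwargs:
      axis: index  # placeholder axis id; 'channel' is the default
```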

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
- strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
- from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
- context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SpaceAxisBase pydantic-model ¤

Bases: AxisBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "x",
      "examples": [
        "x",
        "y",
        "z"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "space",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attometer",
            "angstrom",
            "centimeter",
            "decimeter",
            "exameter",
            "femtometer",
            "foot",
            "gigameter",
            "hectometer",
            "inch",
            "kilometer",
            "megameter",
            "meter",
            "micrometer",
            "mile",
            "millimeter",
            "nanometer",
            "parsec",
            "petameter",
            "picometer",
            "terameter",
            "yard",
            "yoctometer",
            "yottameter",
            "zeptometer",
            "zettameter"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "type"
  ],
  "title": "model.v0_5.SpaceAxisBase",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

id: Annotated[
    NonBatchAxisId, Field(examples=["x", "y", "z"])
]

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['space'] = 'space'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

type pydantic-field ¤

type: Literal['space'] = 'space'

unit pydantic-field ¤

unit: Optional[SpaceUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SpaceInputAxis pydantic-model ¤

Bases: SpaceAxisBase, _WithInputAxisSize

Show JSON schema:
{
  "$defs": {
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/ParameterizedSize"
        },
        {
          "$ref": "#/$defs/SizeReference"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
      "examples": [
        10,
        {
          "min": 32,
          "step": 16
        },
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    },
    "id": {
      "default": "x",
      "examples": [
        "x",
        "y",
        "z"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "space",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attometer",
            "angstrom",
            "centimeter",
            "decimeter",
            "exameter",
            "femtometer",
            "foot",
            "gigameter",
            "hectometer",
            "inch",
            "kilometer",
            "megameter",
            "meter",
            "micrometer",
            "mile",
            "millimeter",
            "nanometer",
            "parsec",
            "petameter",
            "picometer",
            "terameter",
            "yard",
            "yoctometer",
            "yottameter",
            "zeptometer",
            "zettameter"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    },
    "concatenable": {
      "default": false,
      "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
      "title": "Concatenable",
      "type": "boolean"
    }
  },
  "required": [
    "size",
    "type"
  ],
  "title": "model.v0_5.SpaceInputAxis",
  "type": "object"
}

Fields:

concatenable pydantic-field ¤

concatenable: bool = False

If a model has a concatenable input axis, it can be processed blockwise, splitting a longer sample axis into blocks matching its input tensor description. Output axes are concatenable if they have a SizeReference to a concatenable input axis.

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

id: Annotated[
    NonBatchAxisId, Field(examples=["x", "y", "z"])
]

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['space'] = 'space'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    Union[
        Annotated[int, Gt(0)],
        ParameterizedSize,
        SizeReference,
    ],
    Field(
        examples=[
            10,
            ParameterizedSize(min=32, step=16).model_dump(
                mode="json"
            ),
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as

- a fixed integer
- a parameterized series of valid sizes (ParameterizedSize)
- a reference to another axis with an optional offset (SizeReference)
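In an RDF yaml file the three variants of the size field might look like this (tensor and axis ids are placeholders):

```yaml
axes:
  - type: space
    id: x
    size: 10                                      # fixed integer
  - type: space
    id: y
    size: {min: 32, step: 16}                     # ParameterizedSize
  - type: space
    id: z
    size: {tensor_id: t, axis_id: a, offset: 5}   # SizeReference
```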

type pydantic-field ¤

type: Literal['space'] = 'space'

unit pydantic-field ¤

unit: Optional[SpaceUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.

strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.

from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.

context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SpaceOutputAxis pydantic-model ¤

Bases: SpaceAxisBase, _WithOutputAxisSize

Show JSON schema:
{
  "$defs": {
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/SizeReference"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    },
    "id": {
      "default": "x",
      "examples": [
        "x",
        "y",
        "z"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "space",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attometer",
            "angstrom",
            "centimeter",
            "decimeter",
            "exameter",
            "femtometer",
            "foot",
            "gigameter",
            "hectometer",
            "inch",
            "kilometer",
            "megameter",
            "meter",
            "micrometer",
            "mile",
            "millimeter",
            "nanometer",
            "parsec",
            "petameter",
            "picometer",
            "terameter",
            "yard",
            "yoctometer",
            "yottameter",
            "zeptometer",
            "zettameter"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "size",
    "type"
  ],
  "title": "model.v0_5.SpaceOutputAxis",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

id: Annotated[
    NonBatchAxisId, Field(examples=["x", "y", "z"])
]

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['space'] = 'space'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    Union[Annotated[int, Gt(0)], SizeReference],
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as

- a fixed integer
- a reference to another axis with an optional offset (see SizeReference)
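The SizeReference relation quoted in the schema above, `axis.size = reference.size * reference.scale / axis.scale + offset` with fractions rounded down, can be checked with a small plain-Python helper (an illustrative sketch, not part of the bioimageio.spec API):

```python
import math

def size_from_reference(ref_size: int, ref_scale: float,
                        axis_scale: float, offset: int) -> int:
    # axis.size = reference.size * reference.scale / axis.scale + offset,
    # with the fraction rounded down (per the SizeReference docstring).
    return math.floor(ref_size * ref_scale / axis_scale) + offset

# Docstring example: w = 100 px at scale 2 mm, h at scale 4 mm, offset -1
print(size_from_reference(100, 2.0, 4.0, -1))  # 49
```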

type pydantic-field ¤

type: Literal['space'] = 'space'

unit pydantic-field ¤

unit: Optional[SpaceUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.

strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.

from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.

context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

SpaceOutputAxisWithHalo pydantic-model ¤

Bases: SpaceAxisBase, WithHalo

Show JSON schema:
{
  "$defs": {
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "halo": {
      "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
      "minimum": 1,
      "title": "Halo",
      "type": "integer"
    },
    "size": {
      "$ref": "#/$defs/SizeReference",
      "description": "reference to another axis with an optional offset (see [SizeReference][])",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ]
    },
    "id": {
      "default": "x",
      "examples": [
        "x",
        "y",
        "z"
      ],
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "space",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attometer",
            "angstrom",
            "centimeter",
            "decimeter",
            "exameter",
            "femtometer",
            "foot",
            "gigameter",
            "hectometer",
            "inch",
            "kilometer",
            "megameter",
            "meter",
            "micrometer",
            "mile",
            "millimeter",
            "nanometer",
            "parsec",
            "petameter",
            "picometer",
            "terameter",
            "yard",
            "yoctometer",
            "yottameter",
            "zeptometer",
            "zettameter"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "halo",
    "size",
    "type"
  ],
  "title": "model.v0_5.SpaceOutputAxisWithHalo",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

halo pydantic-field ¤

halo: Annotated[int, Ge(1)]

The halo should be cropped from the output tensor to avoid boundary effects. It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo. To document a halo that is already cropped by the model use size.offset instead.
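The crop relation above, `size_after_crop = size - 2 * halo`, amounts to a one-line helper (an illustrative sketch, not part of the spec API):

```python
def size_after_crop(size: int, halo: int) -> int:
    # The halo is cropped from both sides of the axis, so the
    # remaining extent is size - 2 * halo.
    return size - 2 * halo

print(size_after_crop(128, 16))  # 96
```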

id pydantic-field ¤

id: Annotated[
    NonBatchAxisId, Field(examples=["x", "y", "z"])
]

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['space'] = 'space'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    SizeReference,
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

A reference to another axis with an optional offset (see SizeReference).

type pydantic-field ¤

type: Literal['space'] = 'space'

unit pydantic-field ¤

unit: Optional[SpaceUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.

strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.

from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.

context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

StardistPostprocessingDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Stardist postprocessing including non-maximum suppression and converting polygon representations to instance labels

as described in:

- Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers. Cell Detection with Star-convex Polygons. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018.
- Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, and Gene Myers. Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy. The IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, March 2020.

Note: Only available if the stardist package is installed.
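The four fields required by the 2D kwargs schema (`prob_threshold`, `nms_threshold`, `grid`, `b`) can be sketched as a plain dictionary; the numeric values here are hypothetical placeholders, not recommended settings:

```python
# Hypothetical values illustrating the required
# StardistPostprocessingKwargs2D fields from the JSON schema.
kwargs_2d = {
    "prob_threshold": 0.5,  # probability threshold for object candidate selection
    "nms_threshold": 0.4,   # IoU threshold for non-maximum suppression
    "grid": (2, 2),         # grid size of network predictions (2 entries in 2D)
    "b": 2,                 # border region where object probability is set to zero
}
print(sorted(kwargs_2d))
```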

Show JSON schema:
{
  "$defs": {
    "StardistPostprocessingKwargs2D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs2D",
      "type": "object"
    },
    "StardistPostprocessingKwargs3D": {
      "additionalProperties": false,
      "properties": {
        "prob_threshold": {
          "description": "The probability threshold for object candidate selection.",
          "title": "Prob Threshold",
          "type": "number"
        },
        "nms_threshold": {
          "description": "The IoU threshold for non-maximum suppression.",
          "title": "Nms Threshold",
          "type": "number"
        },
        "grid": {
          "description": "Grid size of network predictions.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "integer"
            },
            {
              "type": "integer"
            },
            {
              "type": "integer"
            }
          ],
          "title": "Grid",
          "type": "array"
        },
        "b": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "maxItems": 3,
              "minItems": 3,
              "prefixItems": [
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                },
                {
                  "maxItems": 2,
                  "minItems": 2,
                  "prefixItems": [
                    {
                      "type": "integer"
                    },
                    {
                      "type": "integer"
                    }
                  ],
                  "type": "array"
                }
              ],
              "type": "array"
            }
          ],
          "description": "Border region in which object probability is set to zero.",
          "title": "B"
        },
        "n_rays": {
          "description": "Number of rays for 3D star-convex polyhedra.",
          "title": "N Rays",
          "type": "integer"
        },
        "anisotropy": {
          "description": "Anisotropy factors for 3D star-convex polyhedra, i.e. the physical pixel size along each spatial axis.",
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "type": "number"
            },
            {
              "type": "number"
            },
            {
              "type": "number"
            }
          ],
          "title": "Anisotropy",
          "type": "array"
        },
        "overlap_label": {
          "anyOf": [
            {
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional label to apply to any area of overlapping predicted objects.",
          "title": "Overlap Label"
        }
      },
      "required": [
        "prob_threshold",
        "nms_threshold",
        "grid",
        "b",
        "n_rays",
        "anisotropy"
      ],
      "title": "model.v0_5.StardistPostprocessingKwargs3D",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Stardist postprocessing including non-maximum suppression and converting polygon representations to instance labels\n\nas described in:\n- Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers.\n[*Cell Detection with Star-convex Polygons*](https://arxiv.org/abs/1806.03535).\nInternational Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018.\n- Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, and Gene Myers.\n[*Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy*](http://openaccess.thecvf.com/content_WACV_2020/papers/Weigert_Star-convex_Polyhedra_for_3D_Object_Detection_and_Segmentation_in_Microscopy_WACV_2020_paper.pdf).\nThe IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, March 2020.\n\nNote: Only available if the `stardist` package is installed.",
  "properties": {
    "id": {
      "const": "stardist_postprocessing",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "anyOf": [
        {
          "$ref": "#/$defs/StardistPostprocessingKwargs2D"
        },
        {
          "$ref": "#/$defs/StardistPostprocessingKwargs3D"
        }
      ],
      "title": "Kwargs"
    }
  },
  "required": [
    "id",
    "kwargs"
  ],
  "title": "model.v0_5.StardistPostprocessingDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal["stardist_postprocessing"] = (
    "stardist_postprocessing"
)

implemented_id class-attribute ¤

implemented_id: Literal["stardist_postprocessing"] = (
    "stardist_postprocessing"
)

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.

strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.

from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.

context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

ValidationError: If the object failed validation.

Returns:

Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

StardistPostprocessingKwargs2D pydantic-model ¤

Bases: _StardistPostprocessingKwargsBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "prob_threshold": {
      "description": "The probability threshold for object candidate selection.",
      "title": "Prob Threshold",
      "type": "number"
    },
    "nms_threshold": {
      "description": "The IoU threshold for non-maximum suppression.",
      "title": "Nms Threshold",
      "type": "number"
    },
    "grid": {
      "description": "Grid size of network predictions.",
      "maxItems": 2,
      "minItems": 2,
      "prefixItems": [
        {
          "type": "integer"
        },
        {
          "type": "integer"
        }
      ],
      "title": "Grid",
      "type": "array"
    },
    "b": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "type": "integer"
                },
                {
                  "type": "integer"
                }
              ],
              "type": "array"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "type": "integer"
                },
                {
                  "type": "integer"
                }
              ],
              "type": "array"
            }
          ],
          "type": "array"
        }
      ],
      "description": "Border region in which object probability is set to zero.",
      "title": "B"
    }
  },
  "required": [
    "prob_threshold",
    "nms_threshold",
    "grid",
    "b"
  ],
  "title": "model.v0_5.StardistPostprocessingKwargs2D",
  "type": "object"
}

Fields:

b pydantic-field ¤

b: Union[int, Tuple[Tuple[int, int], Tuple[int, int]]]

Border region in which object probability is set to zero.

grid pydantic-field ¤

grid: Tuple[int, int]

Grid size of network predictions.

nms_threshold pydantic-field ¤

nms_threshold: float

The IoU threshold for non-maximum suppression.

prob_threshold pydantic-field ¤

prob_threshold: float

The probability threshold for object candidate selection.
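
A keyword payload matching the 2D schema above might look like the following sketch; the field names and the required list are taken from the schema, while the numeric values are invented for illustration.

```python
# Required fields per the StardistPostprocessingKwargs2D schema above.
required_2d = ("prob_threshold", "nms_threshold", "grid", "b")

stardist_kwargs_2d = {
    "prob_threshold": 0.5,  # probability threshold for candidate selection
    "nms_threshold": 0.4,   # IoU threshold for non-maximum suppression
    "grid": (2, 2),         # one grid entry per spatial axis (two in 2D)
    "b": 2,                 # border may be a single int ...
}

# ... or explicit (before, after) borders per spatial axis:
stardist_kwargs_2d_per_axis = {**stardist_kwargs_2d, "b": ((2, 2), (0, 0))}

missing = [name for name in required_2d if name not in stardist_kwargs_2d]
```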

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default
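
Together, `__contains__`, `__getitem__`, and `get` give the kwargs model read-only, dict-like access to its declared fields. The stand-in class below mimics that behavior in plain Python; `FieldView` and its field names are hypothetical, and the real `model_fields` is a mapping populated by pydantic.

```python
from typing import Any


class FieldView:
    """Hypothetical stand-in mimicking the dict-like access shown above."""

    # a set of field names suffices for this sketch
    model_fields = {"prob_threshold", "nms_threshold"}

    def __init__(self, **values: Any) -> None:
        self._values = values

    def __contains__(self, item: str) -> bool:
        return item in self.model_fields

    def __getitem__(self, item: str) -> Any:
        if item in self.model_fields:
            return self._values[item]
        raise KeyError(item)

    def get(self, item: str, default: Any = None) -> Any:
        return self[item] if item in self else default


kw = FieldView(prob_threshold=0.5, nms_threshold=0.4)
```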

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]], required): The object to validate.
- strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
- from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
- context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

StardistPostprocessingKwargs3D pydantic-model ¤

Bases: _StardistPostprocessingKwargsBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "prob_threshold": {
      "description": "The probability threshold for object candidate selection.",
      "title": "Prob Threshold",
      "type": "number"
    },
    "nms_threshold": {
      "description": "The IoU threshold for non-maximum suppression.",
      "title": "Nms Threshold",
      "type": "number"
    },
    "grid": {
      "description": "Grid size of network predictions.",
      "maxItems": 3,
      "minItems": 3,
      "prefixItems": [
        {
          "type": "integer"
        },
        {
          "type": "integer"
        },
        {
          "type": "integer"
        }
      ],
      "title": "Grid",
      "type": "array"
    },
    "b": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "maxItems": 3,
          "minItems": 3,
          "prefixItems": [
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "type": "integer"
                },
                {
                  "type": "integer"
                }
              ],
              "type": "array"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "type": "integer"
                },
                {
                  "type": "integer"
                }
              ],
              "type": "array"
            },
            {
              "maxItems": 2,
              "minItems": 2,
              "prefixItems": [
                {
                  "type": "integer"
                },
                {
                  "type": "integer"
                }
              ],
              "type": "array"
            }
          ],
          "type": "array"
        }
      ],
      "description": "Border region in which object probability is set to zero.",
      "title": "B"
    },
    "n_rays": {
      "description": "Number of rays for 3D star-convex polyhedra.",
      "title": "N Rays",
      "type": "integer"
    },
    "anisotropy": {
      "description": "Anisotropy factors for 3D star-convex polyhedra, i.e. the physical pixel size along each spatial axis.",
      "maxItems": 3,
      "minItems": 3,
      "prefixItems": [
        {
          "type": "number"
        },
        {
          "type": "number"
        },
        {
          "type": "number"
        }
      ],
      "title": "Anisotropy",
      "type": "array"
    },
    "overlap_label": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Optional label to apply to any area of overlapping predicted objects.",
      "title": "Overlap Label"
    }
  },
  "required": [
    "prob_threshold",
    "nms_threshold",
    "grid",
    "b",
    "n_rays",
    "anisotropy"
  ],
  "title": "model.v0_5.StardistPostprocessingKwargs3D",
  "type": "object"
}

Fields:

anisotropy pydantic-field ¤

anisotropy: Tuple[float, float, float]

Anisotropy factors for 3D star-convex polyhedra, i.e. the physical pixel size along each spatial axis.

b pydantic-field ¤

b: Union[
    int,
    Tuple[
        Tuple[int, int], Tuple[int, int], Tuple[int, int]
    ],
]

Border region in which object probability is set to zero.

grid pydantic-field ¤

grid: Tuple[int, int, int]

Grid size of network predictions.

n_rays pydantic-field ¤

n_rays: int

Number of rays for 3D star-convex polyhedra.

nms_threshold pydantic-field ¤

nms_threshold: float

The IoU threshold for non-maximum suppression.

overlap_label pydantic-field ¤

overlap_label: Optional[int] = None

Optional label to apply to any area of overlapping predicted objects.

prob_threshold pydantic-field ¤

prob_threshold: float

The probability threshold for object candidate selection.
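
Relative to the 2D variant, the 3D schema adds `n_rays` and `anisotropy` as required fields and `overlap_label` as an optional one. A matching payload might look like this sketch; the field names and required list come from the schema above, the values are invented.

```python
# Required fields per the StardistPostprocessingKwargs3D schema above.
required_3d = (
    "prob_threshold", "nms_threshold", "grid", "b", "n_rays", "anisotropy"
)

stardist_kwargs_3d = {
    "prob_threshold": 0.5,          # probability threshold for candidate selection
    "nms_threshold": 0.3,           # IoU threshold for non-maximum suppression
    "grid": (1, 2, 2),              # one grid entry per spatial axis (three in 3D)
    "b": ((0, 0), (2, 2), (2, 2)),  # (before, after) border per spatial axis
    "n_rays": 96,                   # rays for the 3D star-convex polyhedra
    "anisotropy": (2.0, 1.0, 1.0),  # physical pixel size per spatial axis
    # "overlap_label" is optional and defaults to None when omitted
}

missing = [name for name in required_3d if name not in stardist_kwargs_3d]
```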

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

- obj (Union[Any, Mapping[str, Any]], required): The object to validate.
- strict (Optional[bool], default None): Whether to raise an exception on invalid fields.
- from_attributes (Optional[bool], default None): Whether to extract data from object attributes.
- context (Union[ValidationContext, Mapping[str, Any], None], default None): Additional context to pass to the validator.

Raises:

- ValidationError: If the object failed validation.

Returns:

- Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TensorDescrBase pydantic-model ¤

Bases: Node, Generic[IO_AxisT]

Show JSON schema:
{
  "$defs": {
    "BatchAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "batch",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "batch",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "const": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
          "title": "Size"
        }
      },
      "required": [
        "type"
      ],
      "title": "model.v0_5.BatchAxis",
      "type": "object"
    },
    "ChannelAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "channel",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "channel",
          "title": "Type",
          "type": "string"
        },
        "channel_names": {
          "items": {
            "minLength": 1,
            "title": "Identifier",
            "type": "string"
          },
          "minItems": 1,
          "title": "Channel Names",
          "type": "array"
        }
      },
      "required": [
        "type",
        "channel_names"
      ],
      "title": "model.v0_5.ChannelAxis",
      "type": "object"
    },
    "DataDependentSize": {
      "additionalProperties": false,
      "properties": {
        "min": {
          "default": 1,
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "max": {
          "anyOf": [
            {
              "exclusiveMinimum": 1,
              "type": "integer"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Max"
        }
      },
      "title": "model.v0_5.DataDependentSize",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "IndexInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.IndexInputAxis",
      "type": "object"
    },
    "IndexOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "id": {
          "default": "index",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "index",
          "title": "Type",
          "type": "string"
        },
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            },
            {
              "$ref": "#/$defs/DataDependentSize"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        }
      },
      "required": [
        "type",
        "size"
      ],
      "title": "model.v0_5.IndexOutputAxis",
      "type": "object"
    },
    "IntervalOrRatioDataDescr": {
      "additionalProperties": false,
      "properties": {
        "type": {
          "default": "float32",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64"
          ],
          "examples": [
            "float32",
            "float64",
            "uint8",
            "uint16"
          ],
          "title": "Type",
          "type": "string"
        },
        "range": {
          "default": [
            null,
            null
          ],
          "description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            },
            {
              "anyOf": [
                {
                  "type": "number"
                },
                {
                  "type": "null"
                }
              ]
            }
          ],
          "title": "Range",
          "type": "array"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            }
          ],
          "default": "arbitrary unit",
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "description": "Scale for data on an interval (or ratio) scale.",
          "title": "Scale",
          "type": "number"
        },
        "offset": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Offset for data on a ratio scale.",
          "title": "Offset"
        }
      },
      "title": "model.v0_5.IntervalOrRatioDataDescr",
      "type": "object"
    },
    "NominalOrOrdinalDataDescr": {
      "additionalProperties": false,
      "properties": {
        "values": {
          "anyOf": [
            {
              "items": {
                "type": "integer"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "number"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "boolean"
              },
              "minItems": 1,
              "type": "array"
            },
            {
              "items": {
                "type": "string"
              },
              "minItems": 1,
              "type": "array"
            }
          ],
          "description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
          "title": "Values"
        },
        "type": {
          "default": "uint8",
          "enum": [
            "float32",
            "float64",
            "uint8",
            "int8",
            "uint16",
            "int16",
            "uint32",
            "int32",
            "uint64",
            "int64",
            "bool"
          ],
          "examples": [
            "float32",
            "uint8",
            "uint16",
            "int64",
            "bool"
          ],
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "const": "arbitrary unit",
              "type": "string"
            },
            {
              "description": "An SI unit",
              "minLength": 1,
              "pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
              "title": "SiUnit",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        }
      },
      "required": [
        "values"
      ],
      "title": "model.v0_5.NominalOrOrdinalDataDescr",
      "type": "object"
    },
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize parameters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize parameter n = 0,1,2,... results in a greater **size**.\n  This allows adjusting the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    },
    "SpaceInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceInputAxis",
      "type": "object"
    },
    "SpaceOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxis",
      "type": "object"
    },
    "SpaceOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "x",
          "examples": [
            "x",
            "y",
            "z"
          ],
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "space",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attometer",
                "angstrom",
                "centimeter",
                "decimeter",
                "exameter",
                "femtometer",
                "foot",
                "gigameter",
                "hectometer",
                "inch",
                "kilometer",
                "megameter",
                "meter",
                "micrometer",
                "mile",
                "millimeter",
                "nanometer",
                "parsec",
                "petameter",
                "picometer",
                "terameter",
                "yard",
                "yoctometer",
                "yottameter",
                "zeptometer",
                "zettameter"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.SpaceOutputAxisWithHalo",
      "type": "object"
    },
    "TimeInputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/ParameterizedSize"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
          "examples": [
            10,
            {
              "min": 32,
              "step": 16
            },
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        },
        "concatenable": {
          "default": false,
          "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
          "title": "Concatenable",
          "type": "boolean"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeInputAxis",
      "type": "object"
    },
    "TimeOutputAxis": {
      "additionalProperties": false,
      "properties": {
        "size": {
          "anyOf": [
            {
              "exclusiveMinimum": 0,
              "type": "integer"
            },
            {
              "$ref": "#/$defs/SizeReference"
            }
          ],
          "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ],
          "title": "Size"
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxis",
      "type": "object"
    },
    "TimeOutputAxisWithHalo": {
      "additionalProperties": false,
      "properties": {
        "halo": {
          "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
          "minimum": 1,
          "title": "Halo",
          "type": "integer"
        },
        "size": {
          "$ref": "#/$defs/SizeReference",
          "description": "reference to another axis with an optional offset (see [SizeReference][])",
          "examples": [
            10,
            {
              "axis_id": "a",
              "offset": 5,
              "tensor_id": "t"
            }
          ]
        },
        "id": {
          "default": "time",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "description": {
          "default": "",
          "description": "A short description of this axis beyond its type and id.",
          "maxLength": 128,
          "title": "Description",
          "type": "string"
        },
        "type": {
          "const": "time",
          "title": "Type",
          "type": "string"
        },
        "unit": {
          "anyOf": [
            {
              "enum": [
                "attosecond",
                "centisecond",
                "day",
                "decisecond",
                "exasecond",
                "femtosecond",
                "gigasecond",
                "hectosecond",
                "hour",
                "kilosecond",
                "megasecond",
                "microsecond",
                "millisecond",
                "minute",
                "nanosecond",
                "petasecond",
                "picosecond",
                "second",
                "terasecond",
                "yoctosecond",
                "yottasecond",
                "zeptosecond",
                "zettasecond"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Unit"
        },
        "scale": {
          "default": 1.0,
          "exclusiveMinimum": 0,
          "title": "Scale",
          "type": "number"
        }
      },
      "required": [
        "halo",
        "size",
        "type"
      ],
      "title": "model.v0_5.TimeOutputAxisWithHalo",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "id": {
      "description": "Tensor id. No duplicates are allowed.",
      "maxLength": 32,
      "minLength": 1,
      "title": "TensorId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "free text description",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "axes": {
      "description": "tensor axes",
      "items": {
        "anyOf": [
          {
            "discriminator": {
              "mapping": {
                "batch": "#/$defs/BatchAxis",
                "channel": "#/$defs/ChannelAxis",
                "index": "#/$defs/IndexInputAxis",
                "space": "#/$defs/SpaceInputAxis",
                "time": "#/$defs/TimeInputAxis"
              },
              "propertyName": "type"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BatchAxis"
              },
              {
                "$ref": "#/$defs/ChannelAxis"
              },
              {
                "$ref": "#/$defs/IndexInputAxis"
              },
              {
                "$ref": "#/$defs/TimeInputAxis"
              },
              {
                "$ref": "#/$defs/SpaceInputAxis"
              }
            ]
          },
          {
            "discriminator": {
              "mapping": {
                "batch": "#/$defs/BatchAxis",
                "channel": "#/$defs/ChannelAxis",
                "index": "#/$defs/IndexOutputAxis",
                "space": {
                  "oneOf": [
                    {
                      "$ref": "#/$defs/SpaceOutputAxis"
                    },
                    {
                      "$ref": "#/$defs/SpaceOutputAxisWithHalo"
                    }
                  ]
                },
                "time": {
                  "oneOf": [
                    {
                      "$ref": "#/$defs/TimeOutputAxis"
                    },
                    {
                      "$ref": "#/$defs/TimeOutputAxisWithHalo"
                    }
                  ]
                }
              },
              "propertyName": "type"
            },
            "oneOf": [
              {
                "$ref": "#/$defs/BatchAxis"
              },
              {
                "$ref": "#/$defs/ChannelAxis"
              },
              {
                "$ref": "#/$defs/IndexOutputAxis"
              },
              {
                "oneOf": [
                  {
                    "$ref": "#/$defs/TimeOutputAxis"
                  },
                  {
                    "$ref": "#/$defs/TimeOutputAxisWithHalo"
                  }
                ]
              },
              {
                "oneOf": [
                  {
                    "$ref": "#/$defs/SpaceOutputAxis"
                  },
                  {
                    "$ref": "#/$defs/SpaceOutputAxisWithHalo"
                  }
                ]
              }
            ]
          }
        ]
      },
      "minItems": 1,
      "title": "Axes",
      "type": "array"
    },
    "test_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has to be an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
    },
    "sample_tensor": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "A sample tensor to illustrate a possible input/output for the model.\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
    },
    "data": {
      "anyOf": [
        {
          "$ref": "#/$defs/NominalOrOrdinalDataDescr"
        },
        {
          "$ref": "#/$defs/IntervalOrRatioDataDescr"
        },
        {
          "items": {
            "anyOf": [
              {
                "$ref": "#/$defs/NominalOrOrdinalDataDescr"
              },
              {
                "$ref": "#/$defs/IntervalOrRatioDataDescr"
              }
            ]
          },
          "minItems": 1,
          "type": "array"
        }
      ],
      "default": {
        "type": "float32",
        "range": [
          null,
          null
        ],
        "unit": "arbitrary unit",
        "scale": 1.0,
        "offset": null
      },
      "description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
      "title": "Data"
    }
  },
  "required": [
    "id",
    "axes"
  ],
  "title": "model.v0_5.TensorDescrBase",
  "type": "object"
}
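
The `SizeReference` rule above (`axis.size = reference.size * reference.scale / axis.scale + offset`, fractions rounded down) and the `ParameterizedSize` series can be sketched in plain Python without the library; the function names here are illustrative, not part of bioimageio.spec:

```python
import math

def size_from_reference(ref_size: int, ref_scale: float, scale: float, offset: int = 0) -> int:
    """Size of a referencing axis per the SizeReference rule.

    Fractions are rounded down (rule 3 in the schema description).
    """
    return math.floor(ref_size * ref_scale / scale) + offset

def parameterized_size(min_size: int, step: int, n: int) -> int:
    """ParameterizedSize: valid sizes are min + n*step for n = 0, 1, 2, ..."""
    return min_size + n * step

# the schema's own example: w = 100 px at scale 2, h at scale 4 with offset -1
print(size_from_reference(100, 2, 4, offset=-1))  # -> 49
print(parameterized_size(32, 16, 2))              # -> 64
```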

Fields:

Validators:

  • _validate_axes (axes)
  • _validate_sample_tensor
  • _check_data_type_across_channels (data)
  • _check_data_matches_channelaxis

axes pydantic-field ¤

axes: NotEmpty[Sequence[IO_AxisT]]

tensor axes

data pydantic-field ¤

data: Union[
    TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]

Description of the tensor's data values, optionally per channel. If specified per channel, the data type needs to match across channels.
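
The per-channel constraint can be mimicked in a standalone sketch (a hypothetical helper, not the library's `_check_data_type_across_channels` itself), treating each data description as a plain dict with a `type` key:

```python
from typing import Mapping, Sequence, Union

def check_data_type_across_channels(
    data: Union[Mapping[str, str], Sequence[Mapping[str, str]]],
) -> None:
    """If `data` is given per channel, every entry's `type` must match."""
    if isinstance(data, Mapping):
        return  # a single description applies to all channels
    types = {d["type"] for d in data}
    if len(types) > 1:
        raise ValueError(f"data type differs across channels: {sorted(types)}")

check_data_type_across_channels([{"type": "float32"}, {"type": "float32"}])  # ok
```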

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

free text description

dtype property ¤

dtype: Literal[
    "float32",
    "float64",
    "uint8",
    "int8",
    "uint16",
    "int16",
    "uint32",
    "int32",
    "uint64",
    "int64",
    "bool",
]

dtype as specified under data.type or data[i].type

id pydantic-field ¤

id: TensorId

Tensor id. No duplicates are allowed.

sample_tensor pydantic-field ¤

sample_tensor: FAIR[Optional[FileDescr_]] = None

A sample tensor to illustrate a possible input/output for the model. The sample image primarily serves to inform a human user about an example use case and is typically stored as .hdf5, .png or .tiff. It has to be readable by the imageio library (numpy's .npy format is not supported). The image dimensionality has to match the number of axes specified in this tensor description.

shape property ¤

shape

test_tensor pydantic-field ¤

test_tensor: FAIR[Optional[FileDescr_]] = None

An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
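
A test tensor is just a `.npy` file that round-trips through numpy; a minimal sketch (the shape, dtype, and filename are illustrative, not mandated by the spec):

```python
import os
import tempfile

import numpy as np

# a hypothetical test tensor for a description with axes (batch, y, x)
test = np.zeros((1, 64, 64), dtype="float32")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_input.npy")  # extension must be '.npy'
    np.save(path, test)                       # numpy.lib file format
    reloaded = np.load(path)

assert reloaded.shape == (1, 64, 64)
assert reloaded.dtype == np.float32
```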

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get_axis_sizes_for_array ¤

get_axis_sizes_for_array(
    array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
def get_axis_sizes_for_array(self, array: NDArray[Any]) -> Dict[AxisId, int]:
    if len(array.shape) != len(self.axes):
        raise ValueError(
            f"Dimension mismatch: array shape {array.shape} (#{len(array.shape)})"
            + f" incompatible with {len(self.axes)} axes."
        )
    return {a.id: array.shape[i] for i, a in enumerate(self.axes)}
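
The method above pairs each declared axis with the corresponding array dimension after checking the dimension count; the same logic as a standalone sketch with axis ids reduced to plain strings:

```python
from typing import Dict, Sequence, Tuple

def axis_sizes(axis_ids: Sequence[str], shape: Tuple[int, ...]) -> Dict[str, int]:
    """Mirror of get_axis_sizes_for_array: one size per declared axis."""
    if len(shape) != len(axis_ids):
        raise ValueError(
            f"Dimension mismatch: array shape {shape} (#{len(shape)})"
            + f" incompatible with {len(axis_ids)} axes."
        )
    return {a: s for a, s in zip(axis_ids, shape)}

print(axis_sizes(["batch", "y", "x"], (1, 256, 512)))  # {'batch': 1, 'y': 256, 'x': 512}
```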

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TensorId ¤

Bases: LowerCaseIdentifier


              flowchart TD
              bioimageio.spec.model.v0_5.TensorId[TensorId]
              bioimageio.spec._internal.types.LowerCaseIdentifier[LowerCaseIdentifier]
              bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]

              bioimageio.spec._internal.types.LowerCaseIdentifier --> bioimageio.spec.model.v0_5.TensorId
              bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.types.LowerCaseIdentifier

Methods:

Name Description
__get_pydantic_core_schema__
__get_pydantic_json_schema__
__new__

Attributes:

Name Type Description
root_model Type[RootModel[Any]]

the pydantic root model to validate the string

root_model class-attribute ¤

root_model: Type[RootModel[Any]] = RootModel[
    Annotated[LowerCaseIdentifierAnno, MaxLen(32)]
]

the pydantic root model to validate the string
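
The schema constraints on a TensorId (1 to 32 characters, lowercase identifier) can be approximated with a plain regex; this is a hypothetical stand-in, and the exact pattern used by `LowerCaseIdentifier` in bioimageio.spec may differ:

```python
import re

# assumed pattern: starts with a lowercase letter or digit, then lowercase
# letters, digits, '_' or '-', up to 32 characters total
TENSOR_ID = re.compile(r"^[a-z0-9][a-z0-9_\-]{0,31}$")

def is_valid_tensor_id(s: str) -> bool:
    return TENSOR_ID.fullmatch(s) is not None

assert is_valid_tensor_id("input")
assert not is_valid_tensor_id("Input")   # uppercase rejected
assert not is_valid_tensor_id("x" * 33)  # longer than 32 characters
```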

__get_pydantic_core_schema__ classmethod ¤

__get_pydantic_core_schema__(
    source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_core_schema__(
    cls, source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema:
    return no_info_after_validator_function(cls, handler(str))

__get_pydantic_json_schema__ classmethod ¤

__get_pydantic_json_schema__(
    core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
@classmethod
def __get_pydantic_json_schema__(
    cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue:
    json_schema = cls.root_model.model_json_schema(mode=handler.mode)
    json_schema["title"] = cls.__name__.strip("_")
    if cls.__doc__:
        json_schema["description"] = cls.__doc__

    return json_schema

__new__ ¤

__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
def __new__(cls, object: object):
    _validated = cls.root_model.model_validate(object).root
    self = super().__new__(cls, _validated)
    self._validated = _validated
    return self._after_validator()

TensorflowJsWeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits ([valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2).",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "tensorflow_version": {
      "$ref": "#/$defs/Version",
      "description": "Version of the TensorFlow library used."
    }
  },
  "required": [
    "source",
    "tensorflow_version"
  ],
  "title": "model.v0_5.TensorflowJsWeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it has no parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.
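The parent rule above can be sketched with plain dicts (a hypothetical stand-in for the pydantic weight-entry models; the entry contents are illustrative): exactly one entry, the training result, has no parent, and every other entry names the format it was converted from.

```python
# Hypothetical weights entries as plain dicts, mirroring the `parent` rule:
# exactly one entry (the one produced by training) has no parent; every
# other entry points back to the format it was converted from.
weights = {
    "pytorch_state_dict": {"parent": None},           # original training result
    "torchscript": {"parent": "pytorch_state_dict"},  # converted from it
    "onnx": {"parent": "pytorch_state_dict"},
}


def check_parents(weights):
    """Return the root format; raise if the parent structure is invalid."""
    roots = [fmt for fmt, entry in weights.items() if entry["parent"] is None]
    if len(roots) != 1:
        raise ValueError("expected exactly one weights entry without a parent")
    for fmt, entry in weights.items():
        p = entry["parent"]
        if p is not None and (p == fmt or p not in weights):
            raise ValueError(f"invalid parent {p!r} for {fmt!r}")
    return roots[0]


check_parents(weights)  # -> "pytorch_state_dict"
```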

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

The multi-file weights. All required files and folders should be bundled in a single zip archive.
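As a sketch of preparing such a source, the files of a TensorFlow.js model folder can be bundled into one zip archive with the standard library. The file names (model.json, shard file) are illustrative of a typical TensorFlow.js export, not prescribed by this spec.

```python
import tempfile
import zipfile
from pathlib import Path

# Sketch: package a hypothetical tensorflow_js model folder into the single
# zip archive expected as the weights `source`.
with tempfile.TemporaryDirectory() as tmp:
    model_dir = Path(tmp) / "tfjs_model"
    model_dir.mkdir()
    (model_dir / "model.json").write_text("{}")               # illustrative content
    (model_dir / "group1-shard1of1.bin").write_bytes(b"\0" * 8)

    archive = Path(tmp) / "tfjs_model.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        for f in model_dir.rglob("*"):
            # store paths relative to the model folder root
            zf.write(f, f.relative_to(model_dir))

    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()
```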

suffix property ¤

suffix: str

tensorflow_version pydantic-field ¤

tensorflow_version: Version

Version of the TensorFlow library used.

type class-attribute ¤

type: WeightsFormat = 'tensorflow_js'

weights_format_name class-attribute ¤

weights_format_name: str = 'Tensorflow.js'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
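The hash comparison at the core of validate_sha256 boils down to recomputing a SHA-256 digest of the source file and comparing it with the expected value. A minimal standalone sketch using only the standard library (the file content is illustrative):

```python
import hashlib
import tempfile


def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Mirror the check above: compare an expected hash with a recomputed one;
# on mismatch, validate_sha256 raises (or updates the hash, see the code).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"weights bytes")  # stand-in for the actual source file
expected = sha256_of(f.name)
assert sha256_of(f.name) == expected
```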

TensorflowSavedModelBundleWeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "tensorflow_version": {
      "$ref": "#/$defs/Version",
      "description": "Version of the TensorFlow library used."
    },
    "dependencies": {
      "anyOf": [
        {
          "$ref": "#/$defs/FileDescr",
          "examples": [
            {
              "source": "environment.yaml"
            }
          ]
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
    }
  },
  "required": [
    "source",
    "tensorflow_version"
  ],
  "title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it has no parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

dependencies pydantic-field ¤

dependencies: Optional[FileDescr_dependencies] = None

Custom dependencies beyond tensorflow. These should include tensorflow itself, and any version pinning must be compatible with tensorflow_version.
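The compatibility constraint can be illustrated with a toy check (the pin_matches helper is hypothetical, not part of bioimageio.spec): a tensorflow pin in the dependencies file must agree with tensorflow_version.

```python
# Hypothetical helper: does a `pkg==x.y.z` requirement line agree with the
# declared tensorflow_version? Only exact `==` pins are considered here.
def pin_matches(requirement: str, tensorflow_version: str) -> bool:
    name, _, pinned = requirement.partition("==")
    if name.strip() != "tensorflow" or not pinned:
        return True  # unrelated or unpinned requirement: nothing to check
    return pinned.strip() == tensorflow_version


pin_matches("tensorflow==2.15.0", "2.15.0")  # -> True
pin_matches("tensorflow==2.12.0", "2.15.0")  # -> False
```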

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

The multi-file weights. All required files and folders should be bundled in a single zip archive.

suffix property ¤

suffix: str

tensorflow_version pydantic-field ¤

tensorflow_version: Version

Version of the TensorFlow library used.

type class-attribute ¤

type: WeightsFormat = 'tensorflow_saved_model_bundle'

weights_format_name class-attribute ¤

weights_format_name: str = 'Tensorflow Saved Model'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )

TimeAxisBase pydantic-model ¤

Bases: AxisBase

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "id": {
      "default": "time",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "time",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attosecond",
            "centisecond",
            "day",
            "decisecond",
            "exasecond",
            "femtosecond",
            "gigasecond",
            "hectosecond",
            "hour",
            "kilosecond",
            "megasecond",
            "microsecond",
            "millisecond",
            "minute",
            "nanosecond",
            "petasecond",
            "picosecond",
            "second",
            "terasecond",
            "yoctosecond",
            "yottasecond",
            "zeptosecond",
            "zettasecond"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "type"
  ],
  "title": "model.v0_5.TimeAxisBase",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['time'] = 'time'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

type pydantic-field ¤

type: Literal['time'] = 'time'

unit pydantic-field ¤

unit: Optional[TimeUnit] = None
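Assuming scale is the physical step per frame expressed in unit (consistent with how SizeReference uses scale for space axes below), a frame index maps to a physical time coordinate as index * scale. A minimal sketch with illustrative values:

```python
# Sketch: `scale` is the physical step per frame and `unit` names its unit
# (assumption based on the axis fields above; values are illustrative).
scale = 0.5   # half a `unit` per frame
unit = "second"


def physical_time(index: int, scale: float) -> float:
    """Map a frame index on the time axis to a physical coordinate."""
    return index * scale


physical_time(4, scale)  # -> 2.0 (seconds)
```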

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TimeInputAxis pydantic-model ¤

Bases: TimeAxisBase, _WithInputAxisSize

Show JSON schema:
{
  "$defs": {
    "ParameterizedSize": {
      "additionalProperties": false,
      "description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
      "properties": {
        "min": {
          "exclusiveMinimum": 0,
          "title": "Min",
          "type": "integer"
        },
        "step": {
          "exclusiveMinimum": 0,
          "title": "Step",
          "type": "integer"
        }
      },
      "required": [
        "min",
        "step"
      ],
      "title": "model.v0_5.ParameterizedSize",
      "type": "object"
    },
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/ParameterizedSize"
        },
        {
          "$ref": "#/$defs/SizeReference"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
      "examples": [
        10,
        {
          "min": 32,
          "step": 16
        },
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    },
    "id": {
      "default": "time",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "time",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attosecond",
            "centisecond",
            "day",
            "decisecond",
            "exasecond",
            "femtosecond",
            "gigasecond",
            "hectosecond",
            "hour",
            "kilosecond",
            "megasecond",
            "microsecond",
            "millisecond",
            "minute",
            "nanosecond",
            "petasecond",
            "picosecond",
            "second",
            "terasecond",
            "yoctosecond",
            "yottasecond",
            "zeptosecond",
            "zettasecond"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    },
    "concatenable": {
      "default": false,
      "description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
      "title": "Concatenable",
      "type": "boolean"
    }
  },
  "required": [
    "size",
    "type"
  ],
  "title": "model.v0_5.TimeInputAxis",
  "type": "object"
}

Fields:

concatenable pydantic-field ¤

concatenable: bool = False

If a model has a concatenable input axis, it can be processed blockwise, splitting a longer sample axis into blocks matching its input tensor description. Output axes are concatenable if they have a SizeReference to a concatenable input axis.

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['time'] = 'time'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    Union[
        Annotated[int, Gt(0)],
        ParameterizedSize,
        SizeReference,
    ],
    Field(
        examples=[
            10,
            ParameterizedSize(min=32, step=16).model_dump(
                mode="json"
            ),
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as

  • a fixed integer
  • a parameterized series of valid sizes (ParameterizedSize)
  • a reference to another axis with an optional offset (SizeReference)
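
As a plain-Python sketch (independent of bioimageio.spec), the ParameterizedSize rule `size = min + n*step` enumerates valid axis sizes like so; `valid_sizes` is a hypothetical helper, not part of the library:

```python
def valid_sizes(min_size: int, step: int, n_max: int) -> list:
    """Enumerate valid axis sizes `min + n*step` for n = 0..n_max."""
    return [min_size + n * step for n in range(n_max + 1)]

# For the schema example {"min": 32, "step": 16}:
print(valid_sizes(32, 16, 3))  # [32, 48, 64, 80]
```

A greater blocksize parameter n always yields a greater size, which is what makes the parameterization usable for generic blockwise processing.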

type pydantic-field ¤

type: Literal['time'] = 'time'

unit pydantic-field ¤

unit: Optional[TimeUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TimeOutputAxis pydantic-model ¤

Bases: TimeAxisBase, _WithOutputAxisSize

Show JSON schema:
{
  "$defs": {
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "size": {
      "anyOf": [
        {
          "exclusiveMinimum": 0,
          "type": "integer"
        },
        {
          "$ref": "#/$defs/SizeReference"
        }
      ],
      "description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ],
      "title": "Size"
    },
    "id": {
      "default": "time",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "time",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attosecond",
            "centisecond",
            "day",
            "decisecond",
            "exasecond",
            "femtosecond",
            "gigasecond",
            "hectosecond",
            "hour",
            "kilosecond",
            "megasecond",
            "microsecond",
            "millisecond",
            "minute",
            "nanosecond",
            "petasecond",
            "picosecond",
            "second",
            "terasecond",
            "yoctosecond",
            "yottasecond",
            "zeptosecond",
            "zettasecond"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "size",
    "type"
  ],
  "title": "model.v0_5.TimeOutputAxis",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['time'] = 'time'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    Union[Annotated[int, Gt(0)], SizeReference],
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

The size/length of this axis can be specified as

  • a fixed integer
  • a reference to another axis with an optional offset (see SizeReference)
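
The SizeReference resolution rule quoted in the schema above, `axis.size = reference.size * reference.scale / axis.scale + offset` with fractions rounded down, can be sketched in plain Python; `resolve_size_reference` is a hypothetical helper for illustration, not the library's API:

```python
import math

def resolve_size_reference(
    ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0
) -> int:
    """Resolve an axis size from a reference axis; fractions are rounded down."""
    return math.floor(ref_size * ref_scale / axis_scale) + offset

# The schema's own example: w = 100 px at scale 2, h at scale 4, offset -1
print(resolve_size_reference(100, 2, 4, offset=-1))  # 49
```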

type pydantic-field ¤

type: Literal['time'] = 'time'

unit pydantic-field ¤

unit: Optional[TimeUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TimeOutputAxisWithHalo pydantic-model ¤

Bases: TimeAxisBase, WithHalo

Show JSON schema:
{
  "$defs": {
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "halo": {
      "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
      "minimum": 1,
      "title": "Halo",
      "type": "integer"
    },
    "size": {
      "$ref": "#/$defs/SizeReference",
      "description": "reference to another axis with an optional offset (see [SizeReference][])",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ]
    },
    "id": {
      "default": "time",
      "maxLength": 16,
      "minLength": 1,
      "title": "AxisId",
      "type": "string"
    },
    "description": {
      "default": "",
      "description": "A short description of this axis beyond its type and id.",
      "maxLength": 128,
      "title": "Description",
      "type": "string"
    },
    "type": {
      "const": "time",
      "title": "Type",
      "type": "string"
    },
    "unit": {
      "anyOf": [
        {
          "enum": [
            "attosecond",
            "centisecond",
            "day",
            "decisecond",
            "exasecond",
            "femtosecond",
            "gigasecond",
            "hectosecond",
            "hour",
            "kilosecond",
            "megasecond",
            "microsecond",
            "millisecond",
            "minute",
            "nanosecond",
            "petasecond",
            "picosecond",
            "second",
            "terasecond",
            "yoctosecond",
            "yottasecond",
            "zeptosecond",
            "zettasecond"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Unit"
    },
    "scale": {
      "default": 1.0,
      "exclusiveMinimum": 0,
      "title": "Scale",
      "type": "number"
    }
  },
  "required": [
    "halo",
    "size",
    "type"
  ],
  "title": "model.v0_5.TimeOutputAxisWithHalo",
  "type": "object"
}

Fields:

description pydantic-field ¤

description: Annotated[str, MaxLen(128)] = ''

A short description of this axis beyond its type and id.

halo pydantic-field ¤

halo: Annotated[int, Ge(1)]

The halo should be cropped from the output tensor to avoid boundary effects. It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo. To document a halo that is already cropped by the model use size.offset instead.
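
The halo arithmetic described above (cropping from both sides) can be sketched in plain Python; `size_after_crop` is a hypothetical helper for illustration, not part of bioimageio.spec:

```python
def size_after_crop(size: int, halo: int) -> int:
    """Crop the halo from both sides of an output axis:
    size_after_crop = size - 2 * halo."""
    if halo < 1:
        raise ValueError("halo must be at least 1")
    return size - 2 * halo

print(size_after_crop(128, 8))  # 112
```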

id pydantic-field ¤

An axis id unique across all axes of one tensor.

implemented_type class-attribute ¤

implemented_type: Literal['time'] = 'time'

scale pydantic-field ¤

scale: Annotated[float, Gt(0)] = 1.0

size pydantic-field ¤

size: Annotated[
    SizeReference,
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

reference to another axis with an optional offset (see SizeReference)

type pydantic-field ¤

type: Literal['time'] = 'time'

unit pydantic-field ¤

unit: Optional[TimeUnit] = None

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

TorchscriptWeightsDescr pydantic-model ¤

Bases: WeightsEntryDescrBase

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "wraps a packaging.version.Version instance for validation in pydantic models",
      "title": "Version"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    },
    "pytorch_version": {
      "$ref": "#/$defs/Version",
      "description": "Version of the PyTorch library used."
    }
  },
  "required": [
    "source",
    "pytorch_version"
  ],
  "title": "model.v0_5.TorchscriptWeightsDescr",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) who trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weights entry, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weights entries except one (the initial set of weights resulting from training the model) need to have this field.
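The parent rule above can be sketched as a small, self-contained check. This is a hypothetical helper (not part of bioimageio.spec) operating on a plain mapping of weights-format name to its parent format (or None for the initial entry):

```python
# Hypothetical sketch of the "exactly one entry without a parent" rule:
# `entries` maps a weights-format name to the format it was converted from,
# or None for the initial (trained) weights.
from typing import Mapping, Optional


def find_initial_weights(entries: Mapping[str, Optional[str]]) -> str:
    """Return the single weights format that has no parent.

    Raises ValueError if zero or multiple entries lack a parent, or if a
    parent refers to a format not present in `entries`.
    """
    roots = [fmt for fmt, parent in entries.items() if parent is None]
    if len(roots) != 1:
        raise ValueError(f"expected exactly one parent-less entry, found {roots}")
    for fmt, parent in entries.items():
        if parent is not None and parent not in entries:
            raise ValueError(f"{fmt}: unknown parent {parent!r}")
    return roots[0]
```

For the example from the description, `find_initial_weights({"pytorch_state_dict": None, "torchscript": "pytorch_state_dict"})` identifies `pytorch_state_dict` as the initial entry.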

pytorch_version pydantic-field ¤

pytorch_version: Version

Version of the PyTorch library used.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Source of the weights file.

suffix property ¤

suffix: str

type class-attribute ¤

type: WeightsFormat = 'torchscript'

weights_format_name class-attribute ¤

weights_format_name: str = 'TorchScript'

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
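The core comparison performed by `validate_sha256` can be reproduced with the standard library alone. This is a minimal sketch covering only the local-file case; the real implementation additionally handles URLs, the `known_files` cache, and hash updating via the validation context:

```python
# Minimal stdlib sketch of the hash check in `validate_sha256` above:
# compute the SHA-256 of a local file and compare it to an expected value.
import hashlib
from pathlib import Path
from typing import Optional


def check_sha256(path: Path, expected: Optional[str]) -> str:
    """Return the file's SHA-256 hex digest; raise on mismatch with `expected`."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if expected is not None and digest != expected:
        raise ValueError(
            f"Sha256 mismatch for {path}. Expected {expected}, got {digest}."
        )
    return digest
```

Passing `expected=None` mirrors the case where no `sha256` was given and the computed value would simply be recorded.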

TrainingDetails pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "$defs": {
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": true,
  "properties": {
    "training_preprocessing": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
      "title": "Training Preprocessing"
    },
    "training_epochs": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Number of training epochs.",
      "title": "Training Epochs"
    },
    "training_batch_size": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Batch size used in training.",
      "title": "Training Batch Size"
    },
    "initial_learning_rate": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Initial learning rate used in training.",
      "title": "Initial Learning Rate"
    },
    "learning_rate_schedule": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Learning rate schedule used in training.",
      "title": "Learning Rate Schedule"
    },
    "loss_function": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Loss function used in training, e.g. nn.MSELoss.",
      "title": "Loss Function"
    },
    "loss_function_kwargs": {
      "additionalProperties": {
        "$ref": "#/$defs/YamlValue"
      },
      "description": "key word arguments for the `loss_function`",
      "title": "Loss Function Kwargs",
      "type": "object"
    },
    "optimizer": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "optimizer, e.g. torch.optim.Adam",
      "title": "Optimizer"
    },
    "optimizer_kwargs": {
      "additionalProperties": {
        "$ref": "#/$defs/YamlValue"
      },
      "description": "key word arguments for the `optimizer`",
      "title": "Optimizer Kwargs",
      "type": "object"
    },
    "regularization": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
      "title": "Regularization"
    },
    "training_duration": {
      "anyOf": [
        {
          "type": "number"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Total training duration in hours.",
      "title": "Training Duration"
    }
  },
  "title": "model.v0_5.TrainingDetails",
  "type": "object"
}

Fields:

initial_learning_rate pydantic-field ¤

initial_learning_rate: Optional[float] = None

Initial learning rate used in training.

learning_rate_schedule pydantic-field ¤

learning_rate_schedule: Optional[str] = None

Learning rate schedule used in training.

loss_function pydantic-field ¤

loss_function: Optional[str] = None

Loss function used in training, e.g. nn.MSELoss.

loss_function_kwargs pydantic-field ¤

loss_function_kwargs: Dict[str, YamlValue]

keyword arguments for the loss_function

optimizer pydantic-field ¤

optimizer: Optional[str] = None

Optimizer, e.g. torch.optim.Adam.

optimizer_kwargs pydantic-field ¤

optimizer_kwargs: Dict[str, YamlValue]

keyword arguments for the optimizer

regularization pydantic-field ¤

regularization: Optional[str] = None

Regularization techniques used during training, e.g. drop-out or weight decay.

training_batch_size pydantic-field ¤

training_batch_size: Optional[float] = None

Batch size used in training.

training_duration pydantic-field ¤

training_duration: Optional[float] = None

Total training duration in hours.

training_epochs pydantic-field ¤

training_epochs: Optional[float] = None

Number of training epochs.

training_preprocessing pydantic-field ¤

training_preprocessing: Optional[str] = None

Detailed image preprocessing steps during model training:

Mention:

  • Normalization methods
  • Augmentation strategies
  • Resizing/resampling procedures
  • Artifact handling
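A TrainingDetails entry assembles the optional fields above into one record. The following plain-dict example uses the field names from the schema; all of the values are made up for illustration:

```python
# Illustrative TrainingDetails payload as a plain dict.
# Field names follow the schema above; values are invented examples.
training_details = {
    "training_preprocessing": "zero-mean/unit-variance normalization; random flips",
    "training_epochs": 100,
    "training_batch_size": 8,
    "initial_learning_rate": 1e-4,
    "learning_rate_schedule": "cosine decay",
    "loss_function": "nn.MSELoss",
    "loss_function_kwargs": {"reduction": "mean"},
    "optimizer": "torch.optim.Adam",
    "optimizer_kwargs": {"weight_decay": 1e-5},
    "regularization": "weight decay",
    "training_duration": 12.5,
}

# The schema types the numeric fields as number-or-null:
for key in (
    "training_epochs",
    "training_batch_size",
    "initial_learning_rate",
    "training_duration",
):
    value = training_details[key]
    assert value is None or isinstance(value, (int, float))
```

Note that the model allows additional properties, so extra keys beyond these would also validate.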

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Uploader pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "additionalProperties": false,
  "properties": {
    "email": {
      "description": "Email",
      "format": "email",
      "title": "Email",
      "type": "string"
    },
    "name": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "name",
      "title": "Name"
    }
  },
  "required": [
    "email"
  ],
  "title": "generic.v0_2.Uploader",
  "type": "object"
}

Fields:

  • email (EmailStr)
  • name (Optional[Annotated[str, AfterValidator(_remove_slashes)]])

email pydantic-field ¤

email: EmailStr

Email

name pydantic-field ¤

name: Optional[
    Annotated[str, AfterValidator(_remove_slashes)]
] = None

name

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expected any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

Version ¤

Bases: RootModel[Union[str, int, float]]



wraps a packaging.version.Version instance for validation in pydantic models

Methods:

Name Description
__eq__
__ge__
__le__
__lt__
__str__
model_post_init

set _version attribute

Attributes:

Name Type Description
base_version str

The "base version" of the version.

dev Optional[int]

The development number of the version.

epoch int

The epoch of the version.

is_devrelease bool

Whether this version is a development release.

is_postrelease bool

Whether this version is a post-release.

is_prerelease bool

Whether this version is a pre-release.

local Optional[str]

The local version segment of the version.

major int

The first item of `release` or 0 if unavailable.

micro int

The third item of `release` or 0 if unavailable.

minor int

The second item of `release` or 0 if unavailable.

post Optional[int]

The post-release number of the version.

pre Optional[Tuple[str, int]]

The pre-release segment of the version.

public str

The public portion of the version.

release Tuple[int, ...]

The components of the "release" segment of the version.

base_version property ¤

base_version: str

The "base version" of the version.

>>> Version("1.2.3").base_version
'1.2.3'
>>> Version("1.2.3+abc").base_version
'1.2.3'
>>> Version("1!1.2.3+abc.dev1").base_version
'1!1.2.3'

The "base version" is the public version of the project without any pre or post release markers.

dev property ¤

dev: Optional[int]

The development number of the version.

>>> print(Version("1.2.3").dev)
None
>>> Version("1.2.3.dev1").dev
1

epoch property ¤

epoch: int

The epoch of the version.

>>> Version("2.0.0").epoch
0
>>> Version("1!2.0.0").epoch
1

is_devrelease property ¤

is_devrelease: bool

Whether this version is a development release.

>>> Version("1.2.3").is_devrelease
False
>>> Version("1.2.3.dev1").is_devrelease
True

is_postrelease property ¤

is_postrelease: bool

Whether this version is a post-release.

>>> Version("1.2.3").is_postrelease
False
>>> Version("1.2.3.post1").is_postrelease
True

is_prerelease property ¤

is_prerelease: bool

Whether this version is a pre-release.

>>> Version("1.2.3").is_prerelease
False
>>> Version("1.2.3a1").is_prerelease
True
>>> Version("1.2.3b1").is_prerelease
True
>>> Version("1.2.3rc1").is_prerelease
True
>>> Version("1.2.3dev1").is_prerelease
True

local property ¤

local: Optional[str]

The local version segment of the version.

>>> print(Version("1.2.3").local)
None
>>> Version("1.2.3+abc").local
'abc'

major property ¤

major: int

The first item of `release` or 0 if unavailable.

>>> Version("1.2.3").major
1

micro property ¤

micro: int

The third item of `release` or 0 if unavailable.

>>> Version("1.2.3").micro
3
>>> Version("1").micro
0

minor property ¤

minor: int

The second item of `release` or 0 if unavailable.

>>> Version("1.2.3").minor
2
>>> Version("1").minor
0

post property ¤

post: Optional[int]

The post-release number of the version.

>>> print(Version("1.2.3").post)
None
>>> Version("1.2.3.post1").post
1

pre property ¤

pre: Optional[Tuple[str, int]]

The pre-release segment of the version.

>>> print(Version("1.2.3").pre)
None
>>> Version("1.2.3a1").pre
('a', 1)
>>> Version("1.2.3b1").pre
('b', 1)
>>> Version("1.2.3rc1").pre
('rc', 1)

public property ¤

public: str

The public portion of the version.

>>> Version("1.2.3").public
'1.2.3'
>>> Version("1.2.3+abc").public
'1.2.3'
>>> Version("1.2.3+abc.dev1").public
'1.2.3'

release property ¤

release: Tuple[int, ...]

The components of the "release" segment of the version.

>>> Version("1.2.3").release
(1, 2, 3)
>>> Version("2.0.0").release
(2, 0, 0)
>>> Version("1!2.0.0.post0").release
(2, 0, 0)

Includes trailing zeroes but not the epoch or any pre-release / development / post-release suffixes.
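The `release`/`major`/`minor`/`micro` accessors above can be sketched in pure Python for plain dotted version strings. This is a simplified stand-in, not the packaging.version implementation: it ignores epochs, pre/post/dev segments, and local parts:

```python
# Simplified sketch of the release-component accessors shown above,
# for plain dotted versions only (no epoch, pre/post/dev, or local parts).
from typing import Tuple


def release(version: str) -> Tuple[int, ...]:
    """The components of the release segment, e.g. "1.2.3" -> (1, 2, 3)."""
    return tuple(int(part) for part in version.split("."))


def component(version: str, index: int) -> int:
    """major=0, minor=1, micro=2; returns 0 if the component is absent."""
    parts = release(version)
    return parts[index] if index < len(parts) else 0
```

This matches the documented behavior that `Version("1").minor` and `Version("1").micro` fall back to 0.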

__eq__ ¤

__eq__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
def __eq__(self, other: Any):
    if not isinstance(other, Version):
        return NotImplemented
    return self._version == other._version

__ge__ ¤

__ge__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
def __ge__(self, other: Any):
    if not isinstance(other, Version):
        return NotImplemented
    return self._version >= other._version

__le__ ¤

__le__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
def __le__(self, other: Any):
    if not isinstance(other, Version):
        return NotImplemented
    return self._version <= other._version

__lt__ ¤

__lt__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
def __lt__(self, other: Any):
    if not isinstance(other, Version):
        return NotImplemented

    return self._version < other._version

__str__ ¤

__str__()
Source code in src/bioimageio/spec/_internal/version_type.py
def __str__(self):
    return str(self._version)

model_post_init ¤

model_post_init(__context: Any) -> None

set _version attribute

Source code in src/bioimageio/spec/_internal/version_type.py
def model_post_init(self, __context: Any) -> None:
    """set `_version` attribute @private"""
    self._version = packaging.version.Version(str(self.root))
    return super().model_post_init(__context)

WeightsDescr pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "$defs": {
    "ArchitectureFromFileDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Architecture source file",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        }
      },
      "required": [
        "source",
        "callable"
      ],
      "title": "model.v0_5.ArchitectureFromFileDescr",
      "type": "object"
    },
    "ArchitectureFromLibraryDescr": {
      "additionalProperties": false,
      "properties": {
        "callable": {
          "description": "Identifier of the callable that returns a torch.nn.Module instance.",
          "examples": [
            "MyNetworkClass",
            "get_my_model"
          ],
          "minLength": 1,
          "title": "Identifier",
          "type": "string"
        },
        "kwargs": {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "description": "key word arguments for the `callable`",
          "title": "Kwargs",
          "type": "object"
        },
        "import_from": {
          "description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
          "title": "Import From",
          "type": "string"
        }
      },
      "required": [
        "callable",
        "import_from"
      ],
      "title": "model.v0_5.ArchitectureFromLibraryDescr",
      "type": "object"
    },
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "FileDescr": {
      "additionalProperties": false,
      "description": "A file description",
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "File source",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        }
      },
      "required": [
        "source"
      ],
      "title": "_internal.io.FileDescr",
      "type": "object"
    },
    "KerasHdf5WeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "TensorFlow version used to create these weights."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.KerasHdf5WeightsDescr",
      "type": "object"
    },
    "KerasV3WeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the .keras weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "keras_version": {
          "$ref": "#/$defs/Version",
          "description": "Keras version used to create these weights.",
          "ge": 3
        },
        "backend": {
          "description": "Keras backend used to create these weights.",
          "maxItems": 2,
          "minItems": 2,
          "prefixItems": [
            {
              "enum": [
                "tensorflow",
                "jax",
                "torch"
              ],
              "type": "string"
            },
            {
              "$ref": "#/$defs/Version"
            }
          ],
          "title": "Backend",
          "type": "array"
        }
      },
      "required": [
        "source",
        "keras_version",
        "backend"
      ],
      "title": "model.v0_5.KerasV3WeightsDescr",
      "type": "object"
    },
    "OnnxWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "opset_version": {
          "description": "ONNX opset version",
          "minimum": 7,
          "title": "Opset Version",
          "type": "integer"
        },
        "external_data": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "weights.onnx.data"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Source of the external ONNX data file holding the weights.\n(If present, **source** holds the ONNX architecture without weights.)"
        }
      },
      "required": [
        "source",
        "opset_version"
      ],
      "title": "model.v0_5.OnnxWeightsDescr",
      "type": "object"
    },
    "PytorchStateDictWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "architecture": {
          "anyOf": [
            {
              "$ref": "#/$defs/ArchitectureFromFileDescr"
            },
            {
              "$ref": "#/$defs/ArchitectureFromLibraryDescr"
            }
          ],
          "title": "Architecture"
        },
        "pytorch_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the PyTorch library used.\nIf `architecture.dependencies` is specified, it has to include pytorch, and any version pinning has to be compatible."
        },
        "dependencies": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "environment.yaml"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Custom dependencies beyond pytorch described in a Conda environment file.\nAllows specifying custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch, and any version pinning has to be compatible with\n**pytorch_version**."
        }
      },
      "required": [
        "source",
        "architecture",
        "pytorch_version"
      ],
      "title": "model.v0_5.PytorchStateDictWeightsDescr",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    },
    "TensorflowJsWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "The multi-file weights.\nAll required files/folders should be bundled in a zip archive.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the TensorFlow library used."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.TensorflowJsWeightsDescr",
      "type": "object"
    },
    "TensorflowSavedModelBundleWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "The multi-file weights.\nAll required files/folders should be bundled in a zip archive.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "tensorflow_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the TensorFlow library used."
        },
        "dependencies": {
          "anyOf": [
            {
              "$ref": "#/$defs/FileDescr",
              "examples": [
                {
                  "source": "environment.yaml"
                }
              ]
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
        }
      },
      "required": [
        "source",
        "tensorflow_version"
      ],
      "title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
      "type": "object"
    },
    "TorchscriptWeightsDescr": {
      "additionalProperties": false,
      "properties": {
        "source": {
          "anyOf": [
            {
              "description": "A URL with the HTTP or HTTPS scheme.",
              "format": "uri",
              "maxLength": 2083,
              "minLength": 1,
              "title": "HttpUrl",
              "type": "string"
            },
            {
              "$ref": "#/$defs/RelativeFilePath"
            },
            {
              "format": "file-path",
              "title": "FilePath",
              "type": "string"
            }
          ],
          "description": "Source of the weights file.",
          "title": "Source"
        },
        "sha256": {
          "anyOf": [
            {
              "description": "A SHA-256 hash value",
              "maxLength": 64,
              "minLength": 64,
              "title": "Sha256",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "SHA256 hash value of the **source** file.",
          "title": "Sha256"
        },
        "authors": {
          "anyOf": [
            {
              "items": {
                "$ref": "#/$defs/Author"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Authors\nEither the person(s) who trained this model, resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who converted the weights to this weights format.\n    (If this is a child weights entry, i.e. it has a `parent` field)",
          "title": "Authors"
        },
        "parent": {
          "anyOf": [
            {
              "enum": [
                "keras_hdf5",
                "keras_v3",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript"
              ],
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nthe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model)\nneed to have this field.",
          "examples": [
            "pytorch_state_dict"
          ],
          "title": "Parent"
        },
        "comment": {
          "default": "",
          "description": "A comment about this weights entry, for example how these weights were created.",
          "title": "Comment",
          "type": "string"
        },
        "pytorch_version": {
          "$ref": "#/$defs/Version",
          "description": "Version of the PyTorch library used."
        }
      },
      "required": [
        "source",
        "pytorch_version"
      ],
      "title": "model.v0_5.TorchscriptWeightsDescr",
      "type": "object"
    },
    "Version": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        }
      ],
      "description": "Wraps a packaging.version.Version instance for validation in pydantic models.",
      "title": "Version"
    },
    "YamlValue": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "format": "date",
          "type": "string"
        },
        {
          "format": "date-time",
          "type": "string"
        },
        {
          "type": "integer"
        },
        {
          "type": "number"
        },
        {
          "type": "string"
        },
        {
          "items": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "array"
        },
        {
          "additionalProperties": {
            "$ref": "#/$defs/YamlValue"
          },
          "type": "object"
        },
        {
          "type": "null"
        }
      ]
    }
  },
  "additionalProperties": false,
  "properties": {
    "keras_hdf5": {
      "anyOf": [
        {
          "$ref": "#/$defs/KerasHdf5WeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "keras_v3": {
      "anyOf": [
        {
          "$ref": "#/$defs/KerasV3WeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "onnx": {
      "anyOf": [
        {
          "$ref": "#/$defs/OnnxWeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "pytorch_state_dict": {
      "anyOf": [
        {
          "$ref": "#/$defs/PytorchStateDictWeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "tensorflow_js": {
      "anyOf": [
        {
          "$ref": "#/$defs/TensorflowJsWeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "tensorflow_saved_model_bundle": {
      "anyOf": [
        {
          "$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    },
    "torchscript": {
      "anyOf": [
        {
          "$ref": "#/$defs/TorchscriptWeightsDescr"
        },
        {
          "type": "null"
        }
      ],
      "default": null
    }
  },
  "title": "model.v0_5.WeightsDescr",
  "type": "object"
}
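
The `sha256` fields throughout the schema expect a 64-character SHA-256 hex digest of the referenced **source** file. A digest that satisfies the schema's `minLength`/`maxLength` constraints can be computed with Python's standard `hashlib` (the helper name and file path below are illustrative, not part of the bioimageio.spec API):

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

The resulting string is exactly 64 lowercase hex characters, matching the `Sha256` constraint above.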

available_formats property ¤

available_formats: Dict[WeightsFormat, SpecificWeightsDescr]

keras_hdf5 pydantic-field ¤

keras_hdf5: Optional[KerasHdf5WeightsDescr] = None

keras_v3 pydantic-field ¤

keras_v3: Optional[KerasV3WeightsDescr] = None

missing_formats property ¤

missing_formats: Set[WeightsFormat]
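
`available_formats` and `missing_formats` partition the known weights formats by whether the corresponding optional field is set. Over plain dicts the logic amounts to the following sketch (names are illustrative, not the library implementation):

```python
# The seven weights formats named in the schema above.
KNOWN_FORMATS = (
    "keras_hdf5", "keras_v3", "onnx", "pytorch_state_dict",
    "tensorflow_js", "tensorflow_saved_model_bundle", "torchscript",
)


def split_formats(weights: dict):
    """Split known formats into (available, missing) based on which entries are set."""
    available = {
        k: v for k, v in weights.items() if k in KNOWN_FORMATS and v is not None
    }
    missing = {k for k in KNOWN_FORMATS if weights.get(k) is None}
    return available, missing
```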

onnx pydantic-field ¤

onnx: Optional[OnnxWeightsDescr] = None

pytorch_state_dict pydantic-field ¤

pytorch_state_dict: Optional[
    PytorchStateDictWeightsDescr
] = None

tensorflow_js pydantic-field ¤

tensorflow_js: Optional[TensorflowJsWeightsDescr] = None

tensorflow_saved_model_bundle pydantic-field ¤

tensorflow_saved_model_bundle: Optional[
    TensorflowSavedModelBundleWeightsDescr
] = None

torchscript pydantic-field ¤

torchscript: Optional[TorchscriptWeightsDescr] = None
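
The `parent` field descriptions in the schema require that exactly one weights entry (the originally trained one) has no `parent`. A minimal check of that invariant over plain dict entries might look like this sketch (not the library's validator):

```python
def check_parent_invariant(entries: dict) -> None:
    """Raise ValueError unless exactly one weights entry lacks a `parent`."""
    roots = [name for name, entry in entries.items() if entry.get("parent") is None]
    if len(roots) != 1:
        raise ValueError(
            f"expected exactly one weights entry without `parent`, found: {roots}"
        )
```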

__getitem__ ¤

__getitem__(key: WeightsFormat)
Source code in src/bioimageio/spec/model/v0_5.py
def __getitem__(
    self,
    key: WeightsFormat,
):
    if key == "keras_hdf5":
        ret = self.keras_hdf5
    elif key == "keras_v3":
        ret = self.keras_v3
    elif key == "onnx":
        ret = self.onnx
    elif key == "pytorch_state_dict":
        ret = self.pytorch_state_dict
    elif key == "tensorflow_js":
        ret = self.tensorflow_js
    elif key == "tensorflow_saved_model_bundle":
        ret = self.tensorflow_saved_model_bundle
    elif key == "torchscript":
        ret = self.torchscript
    else:
        raise KeyError(key)

    if ret is None:
        raise KeyError(key)

    return ret
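
The dispatch above gives `WeightsDescr` mapping-style read access: an unknown format and a declared-but-absent format both raise `KeyError`. A minimal stand-alone sketch of the same semantics, using a plain dict as a hypothetical stand-in for the pydantic model:

```python
# Hypothetical stand-in for a WeightsDescr instance: format name -> entry or None.
weights = {
    "pytorch_state_dict": "<PytorchStateDictWeightsDescr>",
    "torchscript": None,  # declared format without an actual entry
}

def get_weights_entry(key: str):
    # Unknown formats and formats whose entry is None both raise KeyError,
    # mirroring WeightsDescr.__getitem__.
    entry = weights.get(key)
    if entry is None:
        raise KeyError(key)
    return entry
```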

__setitem__ ¤

__setitem__(
    key: Literal["keras_hdf5"],
    value: Optional[KerasHdf5WeightsDescr],
) -> None
__setitem__(
    key: Literal["keras_v3"],
    value: Optional[KerasV3WeightsDescr],
) -> None
__setitem__(
    key: Literal["onnx"], value: Optional[OnnxWeightsDescr]
) -> None
__setitem__(
    key: Literal["pytorch_state_dict"],
    value: Optional[PytorchStateDictWeightsDescr],
) -> None
__setitem__(
    key: Literal["tensorflow_js"],
    value: Optional[TensorflowJsWeightsDescr],
) -> None
__setitem__(
    key: Literal["tensorflow_saved_model_bundle"],
    value: Optional[TensorflowSavedModelBundleWeightsDescr],
) -> None
__setitem__(
    key: Literal["torchscript"],
    value: Optional[TorchscriptWeightsDescr],
) -> None
__setitem__(
    key: WeightsFormat,
    value: Optional[SpecificWeightsDescr],
)
Source code in src/bioimageio/spec/model/v0_5.py
def __setitem__(
    self,
    key: WeightsFormat,
    value: Optional[SpecificWeightsDescr],
):
    if key == "keras_hdf5":
        if value is not None and not isinstance(value, KerasHdf5WeightsDescr):
            raise TypeError(
                f"Expected KerasHdf5WeightsDescr or None for key 'keras_hdf5', got {type(value)}"
            )
        self.keras_hdf5 = value
    elif key == "keras_v3":
        if value is not None and not isinstance(value, KerasV3WeightsDescr):
            raise TypeError(
                f"Expected KerasV3WeightsDescr or None for key 'keras_v3', got {type(value)}"
            )
        self.keras_v3 = value
    elif key == "onnx":
        if value is not None and not isinstance(value, OnnxWeightsDescr):
            raise TypeError(
                f"Expected OnnxWeightsDescr or None for key 'onnx', got {type(value)}"
            )
        self.onnx = value
    elif key == "pytorch_state_dict":
        if value is not None and not isinstance(
            value, PytorchStateDictWeightsDescr
        ):
            raise TypeError(
                f"Expected PytorchStateDictWeightsDescr or None for key 'pytorch_state_dict', got {type(value)}"
            )
        self.pytorch_state_dict = value
    elif key == "tensorflow_js":
        if value is not None and not isinstance(value, TensorflowJsWeightsDescr):
            raise TypeError(
                f"Expected TensorflowJsWeightsDescr or None for key 'tensorflow_js', got {type(value)}"
            )
        self.tensorflow_js = value
    elif key == "tensorflow_saved_model_bundle":
        if value is not None and not isinstance(
            value, TensorflowSavedModelBundleWeightsDescr
        ):
            raise TypeError(
                f"Expected TensorflowSavedModelBundleWeightsDescr or None for key 'tensorflow_saved_model_bundle', got {type(value)}"
            )
        self.tensorflow_saved_model_bundle = value
    elif key == "torchscript":
        if value is not None and not isinstance(value, TorchscriptWeightsDescr):
            raise TypeError(
                f"Expected TorchscriptWeightsDescr or None for key 'torchscript', got {type(value)}"
            )
        self.torchscript = value
    else:
        raise KeyError(key)

check_entries pydantic-validator ¤

check_entries() -> Self
Source code in src/bioimageio/spec/model/v0_5.py
@model_validator(mode="after")
def check_entries(self) -> Self:
    entries = {wtype for wtype, entry in self if entry is not None}

    if not entries:
        raise ValueError("Missing weights entry")

    entries_wo_parent = {
        wtype
        for wtype, entry in self
        if entry is not None and hasattr(entry, "parent") and entry.parent is None
    }
    if len(entries_wo_parent) != 1:
        issue_warning(
            "Exactly one weights entry may not specify the `parent` field (got"
            + " {value}). That entry is considered the original set of model weights."
            + " Other weight formats are created through conversion of the original or"
            + " already converted weights. They have to reference the weights format"
            + " they were converted from as their `parent`.",
            value=len(entries_wo_parent),
            field="weights",
        )

    for wtype, entry in self:
        if entry is None:
            continue

        assert hasattr(entry, "type")
        assert hasattr(entry, "parent")
        assert wtype == entry.type
        if (
            entry.parent is not None and entry.parent not in entries
        ):  # self reference checked for `parent` field
            raise ValueError(
                f"`weights.{wtype}.parent={entry.parent} not in specified weight"
                + f" formats: {entries}"
            )

    return self
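
In plain terms, `check_entries` enforces that at least one weights entry exists, that (ideally) exactly one entry omits `parent`, and that every `parent` names another declared format. A hedged sketch of those rules on a simple `{format: parent_or_None}` mapping (plain dicts, not the actual spec classes):

```python
def check_weights_parents(entries: dict) -> None:
    """Sketch of the check_entries rules on a {format: parent-or-None} mapping."""
    if not entries:
        raise ValueError("Missing weights entry")

    # Exactly one entry should omit `parent` (the original training result);
    # the real validator only warns here, it does not raise.
    roots = [fmt for fmt, parent in entries.items() if parent is None]
    if len(roots) != 1:
        print(f"warning: expected exactly one entry without `parent`, got {len(roots)}")

    # Every `parent` must reference another declared weights format.
    for fmt, parent in entries.items():
        if parent is not None and parent not in entries:
            raise ValueError(
                f"`weights.{fmt}.parent={parent}` not in specified weight formats"
            )
```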

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

WeightsEntryDescrBase pydantic-model ¤

Bases: FileDescr

Show JSON schema:
{
  "$defs": {
    "Author": {
      "additionalProperties": false,
      "properties": {
        "affiliation": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Affiliation",
          "title": "Affiliation"
        },
        "email": {
          "anyOf": [
            {
              "format": "email",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Email",
          "title": "Email"
        },
        "orcid": {
          "anyOf": [
            {
              "description": "An ORCID identifier, see https://orcid.org/",
              "title": "OrcidId",
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
          "examples": [
            "0000-0001-2345-6789"
          ],
          "title": "Orcid"
        },
        "name": {
          "title": "Name",
          "type": "string"
        },
        "github_user": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "title": "Github User"
        }
      },
      "required": [
        "name"
      ],
      "title": "generic.v0_3.Author",
      "type": "object"
    },
    "RelativeFilePath": {
      "description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
      "format": "path",
      "title": "RelativeFilePath",
      "type": "string"
    }
  },
  "additionalProperties": false,
  "properties": {
    "source": {
      "anyOf": [
        {
          "description": "A URL with the HTTP or HTTPS scheme.",
          "format": "uri",
          "maxLength": 2083,
          "minLength": 1,
          "title": "HttpUrl",
          "type": "string"
        },
        {
          "$ref": "#/$defs/RelativeFilePath"
        },
        {
          "format": "file-path",
          "title": "FilePath",
          "type": "string"
        }
      ],
      "description": "Source of the weights file.",
      "title": "Source"
    },
    "sha256": {
      "anyOf": [
        {
          "description": "A SHA-256 hash value",
          "maxLength": 64,
          "minLength": 64,
          "title": "Sha256",
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "SHA256 hash value of the **source** file.",
      "title": "Sha256"
    },
    "authors": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/Author"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n    (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n    (If this is a child weight, i.e. it has a `parent` field)",
      "title": "Authors"
    },
    "parent": {
      "anyOf": [
        {
          "enum": [
            "keras_hdf5",
            "keras_v3",
            "onnx",
            "pytorch_state_dict",
            "tensorflow_js",
            "tensorflow_saved_model_bundle",
            "torchscript"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
      "examples": [
        "pytorch_state_dict"
      ],
      "title": "Parent"
    },
    "comment": {
      "default": "",
      "description": "A comment about this weights entry, for example how these weights were created.",
      "title": "Comment",
      "type": "string"
    }
  },
  "required": [
    "source"
  ],
  "title": "model.v0_5.WeightsEntryDescrBase",
  "type": "object"
}

Fields:

Validators:

  • _validate_sha256
  • _validate

authors pydantic-field ¤

authors: Optional[List[Author]] = None

Authors: either the person(s) that trained this model, resulting in the original weights file (if this is the initial weights entry, i.e. it does not have a parent), or the person(s) who converted the weights to this weights format (if this is a child weight, i.e. it has a parent field).

comment pydantic-field ¤

comment: str = ''

A comment about this weights entry, for example how these weights were created.

parent pydantic-field ¤

parent: Annotated[
    Optional[WeightsFormat],
    Field(examples=["pytorch_state_dict"]),
] = None

The source weights these weights were converted from. For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights. All weight entries except one (the initial set of weights resulting from training the model) need to have this field.

sha256 pydantic-field ¤

sha256: Optional[Sha256] = None

SHA256 hash value of the source file.

source pydantic-field ¤

source: Annotated[
    FileSource, AfterValidator(wo_special_file_name)
]

Source of the weights file.

suffix property ¤

suffix: str

type class-attribute ¤

weights_format_name class-attribute ¤

weights_format_name: str

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

download ¤

download(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

alias for .get_reader

Source code in src/bioimageio/spec/_internal/io.py
def download(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """alias for `.get_reader`"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

get_reader ¤

get_reader(
    *,
    progressbar: Union[
        ProgressbarLike,
        Callable[[], ProgressbarLike],
        bool,
        None,
    ] = None,
)

open the file source (download if needed)

Source code in src/bioimageio/spec/_internal/io.py
def get_reader(
    self,
    *,
    progressbar: Union[
        ProgressbarLike, Callable[[], ProgressbarLike], bool, None
    ] = None,
):
    """open the file source (download if needed)"""
    return get_reader(self.source, progressbar=progressbar, sha256=self.sha256)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

validate_sha256 ¤

validate_sha256(force_recompute: bool = False) -> None

validate the sha256 hash value of the source file

Source code in src/bioimageio/spec/_internal/io.py
def validate_sha256(self, force_recompute: bool = False) -> None:
    """validate the sha256 hash value of the **source** file"""
    context = get_validation_context()
    src_str = str(self.source)
    if force_recompute:
        actual_sha = None
    else:
        actual_sha = context.known_files.get(src_str)

    if actual_sha is None:
        if context.perform_io_checks or force_recompute:
            reader = get_reader(self.source, sha256=self.sha256)
            if force_recompute:
                actual_sha = get_sha256(reader)
            else:
                actual_sha = reader.sha256

            context.known_files[src_str] = actual_sha
        elif context.known_files and src_str not in context.known_files:
            # perform_io_checks is False, but known files were given,
            # so we expect all file references to be in there
            raise ValueError(f"File {src_str} not found in `known_files`.")

    if actual_sha is None or self.sha256 == actual_sha:
        return
    elif self.sha256 is None or context.update_hashes:
        self.sha256 = actual_sha
    elif self.sha256 != actual_sha:
        raise ValueError(
            f"Sha256 mismatch for {self.source}. Expected {self.sha256}, got "
            + f"{actual_sha}. Update expected `sha256` or point to the matching "
            + "file."
        )
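
An expected `sha256` value can be computed up front with the standard library. This sketch hashes an in-memory buffer rather than a real weights file; the chunked loop works the same for large files opened in binary mode:

```python
import hashlib
import io

def sha256_of(reader, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 hex digest of a binary file-like object in chunks."""
    h = hashlib.sha256()
    for chunk in iter(lambda: reader.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

digest = sha256_of(io.BytesIO(b"weights bytes"))
# A hex digest is 64 characters long, matching the Sha256 field's min/max length.
```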

WithHalo pydantic-model ¤

Bases: Node

Show JSON schema:
{
  "$defs": {
    "SizeReference": {
      "additionalProperties": false,
      "description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n    `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
      "properties": {
        "tensor_id": {
          "description": "tensor id of the reference axis",
          "maxLength": 32,
          "minLength": 1,
          "title": "TensorId",
          "type": "string"
        },
        "axis_id": {
          "description": "axis id of the reference axis",
          "maxLength": 16,
          "minLength": 1,
          "title": "AxisId",
          "type": "string"
        },
        "offset": {
          "default": 0,
          "title": "Offset",
          "type": "integer"
        }
      },
      "required": [
        "tensor_id",
        "axis_id"
      ],
      "title": "model.v0_5.SizeReference",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "properties": {
    "halo": {
      "description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
      "minimum": 1,
      "title": "Halo",
      "type": "integer"
    },
    "size": {
      "$ref": "#/$defs/SizeReference",
      "description": "reference to another axis with an optional offset (see [SizeReference][])",
      "examples": [
        10,
        {
          "axis_id": "a",
          "offset": 5,
          "tensor_id": "t"
        }
      ]
    }
  },
  "required": [
    "halo",
    "size"
  ],
  "title": "model.v0_5.WithHalo",
  "type": "object"
}

Fields:

halo pydantic-field ¤

halo: Annotated[int, Ge(1)]

The halo should be cropped from the output tensor to avoid boundary effects. It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo. To document a halo that is already cropped by the model use size.offset instead.
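
The cropping arithmetic is easy to sanity-check:

```python
def size_after_crop(size: int, halo: int) -> int:
    # The halo is cropped from both sides of the output axis,
    # so the axis shrinks by 2 * halo.
    return size - 2 * halo

assert size_after_crop(256, 32) == 192
```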

size pydantic-field ¤

size: Annotated[
    SizeReference,
    Field(
        examples=[
            10,
            SizeReference(
                tensor_id=TensorId("t"),
                axis_id=AxisId("a"),
                offset=5,
            ).model_dump(mode="json"),
        ]
    ),
]

reference to another axis with an optional offset (see SizeReference)
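
The `SizeReference` formula `axis.size = reference.size * reference.scale / axis.scale + offset` (with fractions rounded down) can be reproduced in a few lines; the function name here is illustrative, not part of the spec:

```python
def referenced_size(ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0) -> int:
    # axis.size = reference.size * reference.scale / axis.scale + offset,
    # truncating the (positive) fraction, i.e. rounding it down as SizeReference requires.
    return int(ref_size * ref_scale / axis_scale) + offset

# Reproduces the SizeReference docstring example: h = 100 * 2 / 4 - 1 = 49
assert referenced_size(100, 2, 4, offset=-1) == 49
```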

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

ZeroMeanUnitVarianceDescr pydantic-model ¤

Bases: NodeWithExplicitlySetFields

Subtract mean and divide by standard deviation.

Examples:

Subtract tensor mean and variance

- in YAML

preprocessing:
  - id: zero_mean_unit_variance

- in Python

>>> preprocessing = [ZeroMeanUnitVarianceDescr()]
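
The transformation itself computes `out = (tensor - mean) / (std + eps)` over the selected axes. A plain-Python sketch over a flat sequence (a hypothetical helper, not part of the spec):

```python
import math

def zero_mean_unit_variance(values, eps: float = 1e-6):
    """Normalize a flat sequence: subtract the mean, divide by std + eps."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)  # population std
    return [(v - mean) / (std + eps) for v in values]

normalized = zero_mean_unit_variance([1.0, 2.0, 3.0])
# The result has mean ~0 and (population) standard deviation ~1.
```
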
Show JSON schema:
{
  "$defs": {
    "ZeroMeanUnitVarianceKwargs": {
      "additionalProperties": false,
      "description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
      "properties": {
        "axes": {
          "anyOf": [
            {
              "items": {
                "maxLength": 16,
                "minLength": 1,
                "title": "AxisId",
                "type": "string"
              },
              "type": "array"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
          "examples": [
            [
              "batch",
              "x",
              "y"
            ]
          ],
          "title": "Axes"
        },
        "eps": {
          "default": 1e-06,
          "description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
          "exclusiveMinimum": 0,
          "maximum": 0.1,
          "title": "Eps",
          "type": "number"
        }
      },
      "title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Subtract mean and divide by variance.\n\nExamples:\n    Subtract tensor mean and variance\n    - in YAML\n    ```yaml\n    preprocessing:\n      - id: zero_mean_unit_variance\n    ```\n    - in Python\n    >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
  "properties": {
    "id": {
      "const": "zero_mean_unit_variance",
      "title": "Id",
      "type": "string"
    },
    "kwargs": {
      "$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
    }
  },
  "required": [
    "id"
  ],
  "title": "model.v0_5.ZeroMeanUnitVarianceDescr",
  "type": "object"
}

Fields:

id pydantic-field ¤

id: Literal["zero_mean_unit_variance"] = (
    "zero_mean_unit_variance"
)

implemented_id class-attribute ¤

implemented_id: Literal["zero_mean_unit_variance"] = (
    "zero_mean_unit_variance"
)

kwargs pydantic-field ¤

__pydantic_init_subclass__ classmethod ¤

__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    explict_fields: Dict[str, Any] = {}
    for attr in dir(cls):
        if attr.startswith("implemented_"):
            field_name = attr.replace("implemented_", "")
            if field_name not in cls.model_fields:
                continue

            assert (
                cls.model_fields[field_name].get_default() is PydanticUndefined
            ), field_name
            default = getattr(cls, attr)
            explict_fields[field_name] = default

    cls._fields_to_set_explicitly = MappingProxyType(explict_fields)
    return super().__pydantic_init_subclass__(**kwargs)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

Name Type Description Default

obj ¤

Union[Any, Mapping[str, Any]]

The object to validate.

required

strict ¤

Optional[bool]

Whether to raise an exception on invalid fields.

None

from_attributes ¤

Optional[bool]

Whether to extract data from object attributes.

None

context ¤

Union[ValidationContext, Mapping[str, Any], None]

Additional context to pass to the validator.

None

Raises:

Type Description
ValidationError

If the object failed validation.

Returns:

Type Description
Self

The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )
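The context handling above promotes a plain mapping to a ValidationContext and enters it as a context manager, so `__init__` and `model_validate` observe the same ambient context. A simplified sketch of that ambient-context pattern using the standard library's contextvars (all names here are hypothetical stand-ins, not the actual bioimageio.spec internals):

```python
import contextvars
from dataclasses import dataclass

# ambient context slot; None means "use defaults"
_current = contextvars.ContextVar("validation_context", default=None)


@dataclass
class MiniValidationContext:
    """Hypothetical stand-in for bioimageio.spec's ValidationContext."""

    perform_io_checks: bool = True

    def __enter__(self):
        self._token = _current.set(self)
        return self

    def __exit__(self, *exc):
        _current.reset(self._token)


def get_context() -> MiniValidationContext:
    """Return the ambient context, or defaults if none is active."""
    ctx = _current.get()
    return ctx if ctx is not None else MiniValidationContext()


def validate(obj, context=None):
    # promote a plain mapping to a context object, mirroring model_validate
    if isinstance(context, dict):
        context = MiniValidationContext(**context)
    if context is None:
        context = get_context()
    with context:
        # anything that runs inside sees the same ambient context
        return get_context().perform_io_checks


result = validate({}, context={"perform_io_checks": False})
```

After `validate` returns, the previous ambient context is restored, which is what makes nested validations behave predictably.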

ZeroMeanUnitVarianceKwargs pydantic-model ¤

Bases: KwargsNode

keyword arguments for ZeroMeanUnitVarianceDescr

Show JSON schema:
{
  "additionalProperties": false,
  "description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
  "properties": {
    "axes": {
      "anyOf": [
        {
          "items": {
            "maxLength": 16,
            "minLength": 1,
            "title": "AxisId",
            "type": "string"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
      "examples": [
        [
          "batch",
          "x",
          "y"
        ]
      ],
      "title": "Axes"
    },
    "eps": {
      "default": 1e-06,
      "description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
      "exclusiveMinimum": 0,
      "maximum": 0.1,
      "title": "Eps",
      "type": "number"
    }
  },
  "title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
  "type": "object"
}

Fields:

  • axes (Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])])
  • eps (Annotated[float, Interval(gt=0, le=0.1)])

axes pydantic-field ¤

axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None

The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std. For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x') resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y'). To normalize each sample independently leave out the 'batch' axis. Default: Scale all axes jointly.

eps pydantic-field ¤

eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06

epsilon for numeric stability: out = (tensor - mean) / (std + eps).
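As a rough illustration of what these kwargs control, the following pure-Python sketch (a hypothetical helper, not part of bioimageio.spec) applies the normalization out = (x - mean) / (std + eps) to a flat list of values; the real preprocessing reduces over the configured `axes` of an n-dimensional tensor instead:

```python
import statistics


def zero_mean_unit_variance(values, eps=1e-6):
    """Normalize values to zero mean and unit variance: (x - mean) / (std + eps).

    Hypothetical stand-in for the preprocessing described by
    ZeroMeanUnitVarianceKwargs, reduced to the 1-D case.
    """
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)  # population std over the same elements as the mean
    return [(v - mean) / (std + eps) for v in values]


normalized = zero_mean_unit_variance([0.0, 2.0, 4.0])
```

The `eps` term only guards against division by zero for near-constant data; for typical inputs it barely changes the result.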

__contains__ ¤

__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __contains__(self, item: str) -> bool:
    return item in self.__class__.model_fields

__getitem__ ¤

__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def __getitem__(self, item: str) -> Any:
    if item in self.__class__.model_fields:
        return getattr(self, item)
    else:
        raise KeyError(item)

dict_from_kwargs classmethod ¤

dict_from_kwargs(
    *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def dict_from_kwargs(
    cls: Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> Dict[str, Any]:
    assert not args, "Did not expect any args"
    return dict(kwargs)

get ¤

get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
def get(self, item: str, default: Any = None) -> Any:
    return self[item] if item in self else default
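Taken together, `__contains__`, `__getitem__`, and `get` give kwargs nodes a read-only, dict-like view over their declared fields. A minimal stand-in (plain Python, no pydantic) illustrating the same pattern:

```python
class DictLikeNode:
    """Sketch of the dict-like access that kwargs nodes expose.

    Lookup is restricted to declared fields: unknown keys raise KeyError
    from __getitem__ and fall back to `default` in get().
    """

    _fields = ("axes", "eps")  # stand-in for pydantic's model_fields

    def __init__(self, axes=None, eps=1e-6):
        self.axes = axes
        self.eps = eps

    def __contains__(self, item: str) -> bool:
        return item in self._fields

    def __getitem__(self, item: str):
        if item in self._fields:
            return getattr(self, item)
        raise KeyError(item)

    def get(self, item: str, default=None):
        return self[item] if item in self else default


node = DictLikeNode(axes=("x", "y"))
```

This lets generic code treat kwargs nodes and plain dicts uniformly without exposing arbitrary attribute access.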

model_validate classmethod ¤

model_validate(
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[
        Literal["allow", "ignore", "forbid"]
    ] = None,
    from_attributes: Optional[bool] = None,
    context: Union[
        ValidationContext, Mapping[str, Any], None
    ] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self

Validate a pydantic model instance.

Parameters:

  • obj (Union[Any, Mapping[str, Any]]): The object to validate. Required.
  • strict (Optional[bool]): Whether to raise an exception on invalid fields. Default: None.
  • from_attributes (Optional[bool]): Whether to extract data from object attributes. Default: None.
  • context (Union[ValidationContext, Mapping[str, Any], None]): Additional context to pass to the validator. Default: None.

Raises:

  • ValidationError: If the object failed validation.

Returns:

  • Self: The validated description instance.

Source code in src/bioimageio/spec/_internal/node.py
@classmethod
def model_validate(
    cls,
    obj: Union[Any, Mapping[str, Any]],
    *,
    strict: Optional[bool] = None,
    extra: Optional[Literal["allow", "ignore", "forbid"]] = None,
    from_attributes: Optional[bool] = None,
    context: Union[ValidationContext, Mapping[str, Any], None] = None,
    by_alias: Optional[bool] = None,
    by_name: Optional[bool] = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to raise an exception on invalid fields.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.

    Raises:
        ValidationError: If the object failed validation.

    Returns:
        The validated description instance.
    """
    __tracebackhide__ = True

    if context is None:
        context = get_validation_context()
    elif isinstance(context, collections.abc.Mapping):
        context = ValidationContext(**context)

    assert not isinstance(obj, collections.abc.Mapping) or is_kwargs(obj), obj

    # TODO: pass on extra with pydantic >=2.12
    if extra is not None:
        warnings.warn("`extra` argument is currently ignored")

    with context:
        # use validation context as context manager for equal behavior of __init__ and model_validate
        return super().model_validate(
            obj, strict=strict, from_attributes=from_attributes
        )

convert_axes ¤

convert_axes(
    axes: str,
    *,
    shape: Union[
        Sequence[int],
        _ParameterizedInputShape_v0_4,
        _ImplicitOutputShape_v0_4,
    ],
    tensor_type: Literal["input", "output"],
    halo: Optional[Sequence[int]],
    size_refs: Mapping[_TensorName_v0_4, Mapping[str, int]],
)
Source code in src/bioimageio/spec/model/v0_5.py
def convert_axes(
    axes: str,
    *,
    shape: Union[
        Sequence[int], _ParameterizedInputShape_v0_4, _ImplicitOutputShape_v0_4
    ],
    tensor_type: Literal["input", "output"],
    halo: Optional[Sequence[int]],
    size_refs: Mapping[_TensorName_v0_4, Mapping[str, int]],
):
    ret: List[AnyAxis] = []
    for i, a in enumerate(axes):
        axis_type = _AXIS_TYPE_MAP.get(a, a)
        if axis_type == "batch":
            ret.append(BatchAxis())
            continue

        scale = 1.0
        if isinstance(shape, _ParameterizedInputShape_v0_4):
            if shape.step[i] == 0:
                size = shape.min[i]
            else:
                size = ParameterizedSize(min=shape.min[i], step=shape.step[i])
        elif isinstance(shape, _ImplicitOutputShape_v0_4):
            ref_t = str(shape.reference_tensor)
            if ref_t.count(".") == 1:
                t_id, orig_a_id = ref_t.split(".")
            else:
                t_id = ref_t
                orig_a_id = a

            a_id = _AXIS_ID_MAP.get(orig_a_id, a)
            if not (orig_scale := shape.scale[i]):
                # old way to insert a new axis dimension
                size = int(2 * shape.offset[i])
            else:
                scale = 1 / orig_scale
                if axis_type in ("channel", "index"):
                    # these axes no longer have a scale
                    offset_from_scale = orig_scale * size_refs.get(
                        _TensorName_v0_4(t_id), {}
                    ).get(orig_a_id, 0)
                else:
                    offset_from_scale = 0
                size = SizeReference(
                    tensor_id=TensorId(t_id),
                    axis_id=AxisId(a_id),
                    offset=int(offset_from_scale + 2 * shape.offset[i]),
                )
        else:
            size = shape[i]

        if axis_type == "time":
            if tensor_type == "input":
                ret.append(TimeInputAxis(size=size, scale=scale))
            else:
                assert not isinstance(size, ParameterizedSize)
                if halo is None:
                    ret.append(TimeOutputAxis(size=size, scale=scale))
                else:
                    assert not isinstance(size, int)
                    ret.append(
                        TimeOutputAxisWithHalo(size=size, scale=scale, halo=halo[i])
                    )

        elif axis_type == "index":
            if tensor_type == "input":
                ret.append(IndexInputAxis(size=size))
            else:
                if isinstance(size, ParameterizedSize):
                    size = DataDependentSize(min=size.min)

                ret.append(IndexOutputAxis(size=size))
        elif axis_type == "channel":
            assert not isinstance(size, ParameterizedSize)
            if isinstance(size, SizeReference):
                warnings.warn(
                    "Conversion of channel size from an implicit output shape may be"
                    + " wrong"
                )
                ret.append(
                    ChannelAxis(
                        channel_names=[
                            Identifier(f"channel{i}") for i in range(size.offset)
                        ]
                    )
                )
            else:
                ret.append(
                    ChannelAxis(
                        channel_names=[Identifier(f"channel{i}") for i in range(size)]
                    )
                )
        elif axis_type == "space":
            if tensor_type == "input":
                ret.append(SpaceInputAxis(id=AxisId(a), size=size, scale=scale))
            else:
                assert not isinstance(size, ParameterizedSize)
                if halo is None or halo[i] == 0:
                    ret.append(SpaceOutputAxis(id=AxisId(a), size=size, scale=scale))
                elif isinstance(size, int):
                    raise NotImplementedError(
                        f"output axis with halo and fixed size (here {size}) not allowed"
                    )
                else:
                    ret.append(
                        SpaceOutputAxisWithHalo(
                            id=AxisId(a), size=size, scale=scale, halo=halo[i]
                        )
                    )

    return ret
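convert_axes walks the legacy v0.4 axes string character by character, first translating each single-letter axis name into a v0.5 axis type. A hedged sketch of that character-to-type mapping (the actual `_AXIS_TYPE_MAP` in bioimageio.spec may differ in detail):

```python
# Assumed mapping from v0.4 single-character axis names to v0.5 axis types.
AXIS_TYPE_MAP = {
    "b": "batch",
    "t": "time",
    "i": "index",
    "c": "channel",
    "x": "space",
    "y": "space",
    "z": "space",
}


def axis_types(axes: str):
    """Translate a v0.4 axes string like 'bcyx' into a list of v0.5 axis types."""
    return [AXIS_TYPE_MAP.get(a, a) for a in axes]


types = axis_types("bcyx")
```

The rest of convert_axes then dispatches on these types to build the matching axis descriptions (BatchAxis, ChannelAxis, SpaceInputAxis, and so on).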

generate_covers ¤

generate_covers(
    inputs: Sequence[Tuple[InputTensorDescr, NDArray[Any]]],
    outputs: Sequence[
        Tuple[OutputTensorDescr, NDArray[Any]]
    ],
) -> List[Path]
Source code in src/bioimageio/spec/model/v0_5.py
def generate_covers(
    inputs: Sequence[Tuple[InputTensorDescr, NDArray[Any]]],
    outputs: Sequence[Tuple[OutputTensorDescr, NDArray[Any]]],
) -> List[Path]:
    def squeeze(
        data: NDArray[Any], axes: Sequence[AnyAxis]
    ) -> Tuple[NDArray[Any], List[AnyAxis]]:
        """apply numpy.ndarray.squeeze while keeping track of the axis descriptions remaining"""
        if data.ndim != len(axes):
            raise ValueError(
                f"tensor shape {data.shape} does not match described axes"
                + f" {[a.id for a in axes]}"
            )

        axes = [deepcopy(a) for a, s in zip(axes, data.shape) if s != 1]
        return data.squeeze(), axes

    def normalize(
        data: NDArray[Any], axis: Optional[Tuple[int, ...]], eps: float = 1e-7
    ) -> NDArray[np.float32]:
        data = data.astype("float32")
        data -= data.min(axis=axis, keepdims=True)
        data /= data.max(axis=axis, keepdims=True) + eps
        return data

    def to_2d_image(data: NDArray[Any], axes: Sequence[AnyAxis]):
        original_shape = data.shape
        original_axes = list(axes)
        data, axes = squeeze(data, axes)

        # take slice from any batch or index axis if needed
        # and convert the first channel axis and take a slice from any additional channel axes
        slices: Tuple[slice, ...] = ()
        ndim = data.ndim
        ndim_need = 3 if any(isinstance(a, ChannelAxis) for a in axes) else 2
        has_c_axis = False
        for i, a in enumerate(axes):
            s = data.shape[i]
            assert s > 1
            if (
                isinstance(a, (BatchAxis, IndexInputAxis, IndexOutputAxis))
                and ndim > ndim_need
            ):
                data = data[slices + (slice(s // 2 - 1, s // 2),)]
                ndim -= 1
            elif isinstance(a, ChannelAxis):
                if has_c_axis:
                    # second channel axis
                    data = data[slices + (slice(0, 1),)]
                    ndim -= 1
                else:
                    has_c_axis = True
                    if s == 2:
                        # visualize two channels with cyan and magenta
                        data = np.concatenate(
                            [
                                data[slices + (slice(1, 2),)],
                                data[slices + (slice(0, 1),)],
                                (
                                    data[slices + (slice(0, 1),)]
                                    + data[slices + (slice(1, 2),)]
                                )
                                / 2,  # TODO: take maximum instead?
                            ],
                            axis=i,
                        )
                    elif data.shape[i] == 3:
                        pass  # visualize 3 channels as RGB
                    else:
                        # visualize first 3 channels as RGB
                        data = data[slices + (slice(3),)]

                    assert data.shape[i] == 3

            slices += (slice(None),)

        data, axes = squeeze(data, axes)
        assert len(axes) == ndim
        # take slice from z axis if needed
        slices = ()
        if ndim > ndim_need:
            for i, a in enumerate(axes):
                s = data.shape[i]
                if a.id == AxisId("z"):
                    data = data[slices + (slice(s // 2 - 1, s // 2),)]
                    data, axes = squeeze(data, axes)
                    ndim -= 1
                    break

            slices += (slice(None),)

        # take slice from any space or time axis
        slices = ()

        for i, a in enumerate(axes):
            if ndim <= ndim_need:
                break

            s = data.shape[i]
            assert s > 1
            if isinstance(
                a, (SpaceInputAxis, SpaceOutputAxis, TimeInputAxis, TimeOutputAxis)
            ):
                data = data[slices + (slice(s // 2 - 1, s // 2),)]
                ndim -= 1

            slices += (slice(None),)

        del slices
        data, axes = squeeze(data, axes)
        assert len(axes) == ndim

        if (has_c_axis and ndim != 3) or (not has_c_axis and ndim != 2):
            raise ValueError(
                f"Failed to construct cover image from shape {original_shape} with axes {[a.id for a in original_axes]}."
            )

        if not has_c_axis:
            assert ndim == 2
            data = np.repeat(data[:, :, None], 3, axis=2)
            axes.append(ChannelAxis(channel_names=list(map(Identifier, "RGB"))))
            ndim += 1

        assert ndim == 3

        # transpose axis order such that longest axis comes first...
        axis_order: List[int] = list(np.argsort(list(data.shape)))
        axis_order.reverse()
        # ... and channel axis is last
        c = [i for i in range(3) if isinstance(axes[i], ChannelAxis)][0]
        axis_order.append(axis_order.pop(c))
        axes = [axes[ao] for ao in axis_order]
        data = data.transpose(axis_order)

        # h, w = data.shape[:2]
        # if h / w  in (1.0 or 2.0):
        #     pass
        # elif h / w < 2:
        # TODO: enforce 2:1 or 1:1 aspect ratio for generated cover images

        norm_along = (
            tuple(i for i, a in enumerate(axes) if a.type in ("space", "time")) or None
        )
        # normalize the data and map to 8 bit
        data = normalize(data, norm_along)
        data = (data * 255).astype("uint8")

        return data

    def create_diagonal_split_image(im0: NDArray[Any], im1: NDArray[Any]):
        assert im0.dtype == im1.dtype == np.uint8
        assert im0.shape == im1.shape
        assert im0.ndim == 3
        N, M, C = im0.shape
        assert C == 3
        out = np.ones((N, M, C), dtype="uint8")
        for c in range(C):
            outc = np.tril(im0[..., c])
            mask = outc == 0
            outc[mask] = np.triu(im1[..., c])[mask]
            out[..., c] = outc

        return out

    if not inputs:
        raise ValueError("Missing test input tensor for cover generation.")

    if not outputs:
        raise ValueError("Missing test output tensor for cover generation.")

    ipt_descr, ipt = inputs[0]
    out_descr, out = outputs[0]

    ipt_img = to_2d_image(ipt, ipt_descr.axes)
    out_img = to_2d_image(out, out_descr.axes)

    cover_folder = Path(mkdtemp())
    if ipt_img.shape == out_img.shape:
        covers = [cover_folder / "cover.png"]
        imwrite(covers[0], create_diagonal_split_image(ipt_img, out_img))
    else:
        covers = [cover_folder / "input.png", cover_folder / "output.png"]
        imwrite(covers[0], ipt_img)
        imwrite(covers[1], out_img)

    return covers
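When input and output images share a shape, create_diagonal_split_image fills the lower triangle from the input and the upper triangle from the output. The same idea on plain nested lists (a simplified single-channel sketch of the np.tril/np.triu composition above):

```python
def diagonal_split(im0, im1):
    """Combine two equally sized 2-D images: pixels on or below the main
    diagonal come from im0, pixels above it from im1 (mirroring the
    tril/triu logic in create_diagonal_split_image)."""
    assert len(im0) == len(im1) and len(im0[0]) == len(im1[0])
    return [
        [im0[r][c] if r >= c else im1[r][c] for c in range(len(im0[0]))]
        for r in range(len(im0))
    ]


combined = diagonal_split([[1, 1], [1, 1]], [[9, 9], [9, 9]])
```

The result shows both tensors in a single square cover image, split along the diagonal.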

validate_tensors ¤

validate_tensors(
    tensors: Mapping[
        TensorId, Tuple[TensorDescr, Optional[NDArray[Any]]]
    ],
    tensor_origin: Literal[
        "source", "test_tensor"
    ] = "source",
)
Source code in src/bioimageio/spec/model/v0_5.py
def validate_tensors(
    tensors: Mapping[TensorId, Tuple[TensorDescr, Optional[NDArray[Any]]]],
    tensor_origin: Literal[
        "source", "test_tensor"
    ] = "source",  # for more precise error messages
):
    all_tensor_axes: Dict[TensorId, Dict[AxisId, Tuple[AnyAxis, Optional[int]]]] = {}

    def e_msg_location(d: TensorDescr):
        return f"{'inputs' if isinstance(d, InputTensorDescr) else 'outputs'}[{d.id}]"

    for descr, array in tensors.values():
        if array is None:
            axis_sizes = {a.id: None for a in descr.axes}
        else:
            try:
                axis_sizes = descr.get_axis_sizes_for_array(array)
            except ValueError as e:
                raise ValueError(f"{e_msg_location(descr)} {e}")

        all_tensor_axes[descr.id] = {a.id: (a, axis_sizes[a.id]) for a in descr.axes}

    for descr, array in tensors.values():
        if array is None:
            continue

        if descr.dtype in ("float32", "float64"):
            invalid_test_tensor_dtype = array.dtype.name not in (
                "float32",
                "float64",
                "uint8",
                "int8",
                "uint16",
                "int16",
                "uint32",
                "int32",
                "uint64",
                "int64",
            )
        else:
            invalid_test_tensor_dtype = array.dtype.name != descr.dtype

        if invalid_test_tensor_dtype:
            raise ValueError(
                f"{tensor_origin} data type '{array.dtype.name}' does not"
                + f" match described {e_msg_location(descr)}.dtype '{descr.dtype}'"
            )

        if array.min() > -1e-4 and array.max() < 1e-4:
            raise ValueError(
                "Output values are too small for reliable testing."
                + f" Values <= -1e-4 or >= 1e-4 must be present in {tensor_origin}"
            )

        for a in descr.axes:
            actual_size = all_tensor_axes[descr.id][a.id][1]
            if actual_size is None:
                continue

            if a.size is None:
                continue

            if isinstance(a.size, int):
                if actual_size != a.size:
                    raise ValueError(
                        f"{e_msg_location(descr)}.axes[{a.id}]: {tensor_origin} axis "
                        + f"has incompatible size {actual_size}, expected {a.size}"
                    )
            elif isinstance(a.size, ParameterizedSize):
                _ = a.size.validate_size(
                    actual_size,
                    f"{e_msg_location(descr)}.axes[{a.id}]: {tensor_origin} axis ",
                )
            elif isinstance(a.size, DataDependentSize):
                _ = a.size.validate_size(
                    actual_size,
                    f"{e_msg_location(descr)}.axes[{a.id}]: {tensor_origin} axis ",
                )
            elif isinstance(a.size, SizeReference):
                ref_tensor_axes = all_tensor_axes.get(a.size.tensor_id)
                if ref_tensor_axes is None:
                    raise ValueError(
                        f"{e_msg_location(descr)}.axes[{a.id}].size.tensor_id: Unknown tensor"
                        + f" reference '{a.size.tensor_id}', available: {list(all_tensor_axes)}"
                    )

                ref_axis, ref_size = ref_tensor_axes.get(a.size.axis_id, (None, None))
                if ref_axis is None or ref_size is None:
                    raise ValueError(
                        f"{e_msg_location(descr)}.axes[{a.id}].size.axis_id: Unknown tensor axis"
                        + f" reference '{a.size.tensor_id}.{a.size.axis_id}', available: {list(ref_tensor_axes)}"
                    )

                if a.unit != ref_axis.unit:
                    raise ValueError(
                        f"{e_msg_location(descr)}.axes[{a.id}].size: `SizeReference` requires"
                        + " axis and reference axis to have the same `unit`, but"
                        + f" {a.unit}!={ref_axis.unit}"
                    )

                if actual_size != (
                    expected_size := (
                        ref_size * ref_axis.scale / a.scale + a.size.offset
                    )
                ):
                    raise ValueError(
                        f"{e_msg_location(descr)}.{tensor_origin}: axis '{a.id}' of size"
                        + f" {actual_size} invalid for referenced size {ref_size};"
                        + f" expected {expected_size}"
                    )
            else:
                assert_never(a.size)
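The SizeReference branch checks the actual axis size against ref_size * ref_axis.scale / a.scale + a.size.offset. That formula on its own, as a small helper for illustration:

```python
def expected_size_from_reference(
    ref_size: int, ref_scale: float, scale: float, offset: int
) -> float:
    """Expected axis size implied by a SizeReference: the referenced axis
    size rescaled into this axis' resolution, plus a fixed offset."""
    return ref_size * ref_scale / scale + offset


# e.g. an output axis at half the input resolution (scale 2.0) with offset -4:
size = expected_size_from_reference(ref_size=128, ref_scale=1.0, scale=2.0, offset=-4)
```

If the measured size differs from this expectation, validate_tensors raises the ValueError shown above.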