v0_5
API Reference
model
Classes:
| Name | Description |
|---|---|
| ArchitectureFromFileDescr | |
| ArchitectureFromLibraryDescr | |
| Author | |
| AxisBase | |
| AxisId | |
| BadgeDescr | A custom badge |
| BatchAxis | |
| BiasRisksLimitations | Known biases, risks, technical limitations, and recommendations for model use. |
| BinarizeAlongAxisKwargs | Keyword arguments for [BinarizeDescr][] |
| BinarizeDescr | Binarize the tensor with a fixed threshold. |
| BinarizeKwargs | Keyword arguments for [BinarizeDescr][] |
| BioimageioConfig | |
| CallableFromDepencency | |
| ChannelAxis | |
| CiteEntry | A citation that should be referenced in work using this resource. |
| ClipDescr | Set tensor values below min to min and above max to max. |
| ClipKwargs | Keyword arguments for [ClipDescr][] |
| Config | |
| DataDependentSize | |
| DatasetDescr | A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage |
| DatasetId | |
| Datetime | Timestamp in ISO 8601 format |
| DeprecatedLicenseId | |
| Doi | A digital object identifier, see https://www.doi.org/ |
| EnsureDtypeDescr | Cast the tensor data type to |
| EnsureDtypeKwargs | Keyword arguments for [EnsureDtypeDescr][] |
| EnvironmentalImpact | Environmental considerations for model training and deployment. |
| Evaluation | |
| FileDescr | A file description |
| FixedZeroMeanUnitVarianceAlongAxisKwargs | Keyword arguments for [FixedZeroMeanUnitVarianceDescr][] |
| FixedZeroMeanUnitVarianceDescr | Subtract a given mean and divide by the standard deviation. |
| FixedZeroMeanUnitVarianceKwargs | Keyword arguments for [FixedZeroMeanUnitVarianceDescr][] |
| HttpUrl | A URL with the HTTP or HTTPS scheme. |
| Identifier | |
| IndexAxisBase | |
| IndexInputAxis | |
| IndexOutputAxis | |
| InputTensorDescr | |
| IntervalOrRatioDataDescr | |
| KerasHdf5WeightsDescr | |
| LicenseId | |
| LinkedDataset | Reference to a bioimage.io dataset. |
| LinkedModel | Reference to a bioimage.io model. |
| LinkedResource | Reference to a bioimage.io resource |
| Maintainer | |
| ModelDescr | Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. |
| ModelId | |
| NominalOrOrdinalDataDescr | |
| OnnxWeightsDescr | |
| OrcidId | An ORCID identifier, see https://orcid.org/ |
| OutputTensorDescr | |
| ParameterizedSize | Describes a range of valid tensor axis sizes as |
| PytorchStateDictWeightsDescr | |
| RelativeFilePath | A path relative to the `rdf.yaml` file (also if the RDF source is a URL). |
| ReproducibilityTolerance | Describes what small numerical differences -- if any -- may be tolerated |
| ResourceId | |
| RunMode | |
| ScaleLinearAlongAxisKwargs | Keyword arguments for [ScaleLinearDescr][] |
| ScaleLinearDescr | Fixed linear scaling. |
| ScaleLinearKwargs | Keyword arguments for [ScaleLinearDescr][] |
| ScaleMeanVarianceDescr | Scale a tensor's data distribution to match another tensor's mean/std. |
| ScaleMeanVarianceKwargs | Keyword arguments for [ScaleMeanVarianceDescr][] |
| ScaleRangeDescr | Scale with percentiles. |
| ScaleRangeKwargs | Keyword arguments for [ScaleRangeDescr][] |
| Sha256 | A SHA-256 hash value |
| SiUnit | An SI unit |
| SigmoidDescr | The logistic sigmoid function, a.k.a. expit function. |
| SizeReference | A tensor axis size (extent in pixels/frames) defined in relation to a reference axis. |
| SoftmaxDescr | The softmax function. |
| SoftmaxKwargs | Keyword arguments for [SoftmaxDescr][] |
| SpaceAxisBase | |
| SpaceInputAxis | |
| SpaceOutputAxis | |
| SpaceOutputAxisWithHalo | |
| TensorDescrBase | |
| TensorId | |
| TensorflowJsWeightsDescr | |
| TensorflowSavedModelBundleWeightsDescr | |
| TimeAxisBase | |
| TimeInputAxis | |
| TimeOutputAxis | |
| TimeOutputAxisWithHalo | |
| TorchscriptWeightsDescr | |
| TrainingDetails | |
| Uploader | |
| Version | wraps a packaging.version.Version instance for validation in pydantic models |
| WeightsDescr | |
| WeightsEntryDescrBase | |
| WithHalo | |
| ZeroMeanUnitVarianceDescr | Subtract mean and divide by the standard deviation. |
| ZeroMeanUnitVarianceKwargs | Keyword arguments for [ZeroMeanUnitVarianceDescr][] |
Functions:
| Name | Description |
|---|---|
| convert_axes | |
| generate_covers | |
| validate_tensors | |
Attributes:
| Name | Type | Description |
|---|---|---|
| ANY_AXIS_TYPES | | intended for isinstance comparisons in py<3.10 |
| AnyAxis | | |
| AxisType | | |
| BATCH_AXIS_ID | | |
| BioimageioYamlContent | | |
| FileDescr_dependencies | | |
| FileDescr_external_data | | |
| INPUT_AXIS_TYPES | | intended for isinstance comparisons in py<3.10 |
| IO_AxisT | | |
| InputAxis | | |
| IntervalOrRatioDType | | |
| KnownRunMode | | |
| NominalOrOrdinalDType | | |
| NonBatchAxisId | | |
| NotEmpty | | |
| OUTPUT_AXIS_TYPES | | intended for isinstance comparisons in py<3.10 |
| OutputAxis | | |
| ParameterizedSize_N | | Annotates an integer to calculate a concrete axis size from a ParameterizedSize. |
| PostprocessingDescr | | |
| PostprocessingId | | |
| PreprocessingDescr | | |
| PreprocessingId | | |
| SAME_AS_TYPE | | |
| SpaceUnit | | Space unit compatible with the OME-Zarr axes specification 0.5 |
| SpecificWeightsDescr | | |
| TVs | | |
| TensorDataDescr | | |
| TensorDescr | | |
| TimeUnit | | Time unit compatible with the OME-Zarr axes specification 0.5 |
| VALID_COVER_IMAGE_EXTENSIONS | | |
| WeightsFormat | | |
ANY_AXIS_TYPES
module-attribute
ANY_AXIS_TYPES = INPUT_AXIS_TYPES + OUTPUT_AXIS_TYPES
intended for isinstance comparisons in py<3.10
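The note "intended for isinstance comparisons in py<3.10" exists because `isinstance` against a `typing.Union` alias such as `AnyAxis` only works on Python 3.10+, while a plain tuple of classes works on every version. A minimal stdlib stand-in for the pattern (`NUMERIC_TYPES` plays the role of the axis-class tuples; it is not part of the library):

```python
# Stand-in for INPUT_AXIS_TYPES / OUTPUT_AXIS_TYPES: a plain tuple of classes
# can be passed to isinstance() on any Python version, unlike a typing.Union.
NUMERIC_TYPES = (int, float)
ANY_TYPES = NUMERIC_TYPES + (complex,)  # mirrors ANY_AXIS_TYPES = INPUT_AXIS_TYPES + OUTPUT_AXIS_TYPES

print(isinstance(3, NUMERIC_TYPES))  # True
print(isinstance(1j, ANY_TYPES))     # True
```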
BioimageioYamlContent
module-attribute
BioimageioYamlContent = Dict[str, YamlValue]
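Per the `YamlValue` schema shown further down this page, values may be booleans, dates, numbers, strings, null, or nested lists and mappings thereof, so any str-keyed mapping of such values qualifies as `BioimageioYamlContent`. A hypothetical fragment (the keys below are invented for illustration):

```python
# A hypothetical rdf.yaml-style mapping; BioimageioYamlContent is just
# Dict[str, YamlValue]: string keys with YAML-representable values.
content = {
    "name": "my-model",
    "tags": ["segmentation", "nuclei"],
    "config": {"custom": {"note": None}},
}
assert all(isinstance(k, str) for k in content)
```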
API Reference
spec
FileDescr_dependencies
module-attribute
FileDescr_dependencies = Annotated[
FileDescr_,
WithSuffix((".yaml", ".yml"), case_sensitive=True),
Field(examples=[dict(source="environment.yaml")]),
]
FileDescr_external_data
module-attribute
FileDescr_external_data = Annotated[
FileDescr_,
WithSuffix(".data", case_sensitive=True),
Field(examples=[dict(source="weights.onnx.data")]),
]
INPUT_AXIS_TYPES
module-attribute
INPUT_AXIS_TYPES = (
BatchAxis,
ChannelAxis,
IndexInputAxis,
TimeInputAxis,
SpaceInputAxis,
)
intended for isinstance comparisons in py<3.10
InputAxis
module-attribute
InputAxis = Annotated[
_InputAxisUnion, Discriminator("type")
]
IntervalOrRatioDType
module-attribute
IntervalOrRatioDType = Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
]
NominalOrOrdinalDType
module-attribute
NominalOrOrdinalDType = Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
OUTPUT_AXIS_TYPES
module-attribute
OUTPUT_AXIS_TYPES = (
BatchAxis,
ChannelAxis,
IndexOutputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
)
intended for isinstance comparisons in py<3.10
OutputAxis
module-attribute
OutputAxis = Annotated[
_OutputAxisUnion, Discriminator("type")
]
ParameterizedSize_N
module-attribute
ParameterizedSize_N = int
Annotates an integer to calculate a concrete axis size from a ParameterizedSize.
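A `ParameterizedSize` describes axis sizes of the form `min + n * step`, with `n` being the `ParameterizedSize_N` integer. A self-contained sketch of that rule (the field names `min_size`/`step` and the `n >= 0` constraint are assumptions based on this description, not taken from the library):

```python
def concrete_axis_size(min_size: int, step: int, n: int) -> int:
    """Concrete size for ParameterizedSize_N = n, assuming the
    size = min + n * step rule of ParameterizedSize."""
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    return min_size + n * step

print(concrete_axis_size(64, 16, 3))  # 112
```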
PostprocessingDescr
module-attribute
PostprocessingDescr = Annotated[
Union[
BinarizeDescr,
ClipDescr,
EnsureDtypeDescr,
FixedZeroMeanUnitVarianceDescr,
ScaleLinearDescr,
ScaleMeanVarianceDescr,
ScaleRangeDescr,
SigmoidDescr,
SoftmaxDescr,
ZeroMeanUnitVarianceDescr,
],
Discriminator("id"),
]
PostprocessingId
module-attribute
PostprocessingId = Literal[
"binarize",
"clip",
"ensure_dtype",
"fixed_zero_mean_unit_variance",
"scale_linear",
"scale_mean_variance",
"scale_range",
"sigmoid",
"softmax",
"zero_mean_unit_variance",
]
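Both `PostprocessingDescr` and `PreprocessingDescr` are discriminated unions keyed on the `id` field, so a consumer can dispatch on that string. A plain-Python toy of such dispatch; the `threshold` key matches `BinarizeKwargs` as documented on this page, while the `min`/`max` keys for clip are assumed from the `ClipDescr` description, and the dispatch itself is illustrative, not the library's implementation:

```python
def apply_processing(values, descr):
    """Toy dispatch on the "id" discriminator of a processing descriptor dict."""
    pid = descr["id"]
    if pid == "binarize":
        t = descr["kwargs"]["threshold"]
        return [1.0 if v > t else 0.0 for v in values]
    if pid == "clip":
        lo, hi = descr["kwargs"]["min"], descr["kwargs"]["max"]
        return [min(max(v, lo), hi) for v in values]
    raise NotImplementedError(f"unhandled processing id: {pid}")

print(apply_processing([0.2, 0.8], {"id": "binarize", "kwargs": {"threshold": 0.5}}))  # [0.0, 1.0]
```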
PreprocessingDescr
module-attribute
PreprocessingDescr = Annotated[
Union[
BinarizeDescr,
ClipDescr,
EnsureDtypeDescr,
FixedZeroMeanUnitVarianceDescr,
ScaleLinearDescr,
ScaleRangeDescr,
SigmoidDescr,
SoftmaxDescr,
ZeroMeanUnitVarianceDescr,
],
Discriminator("id"),
]
PreprocessingId
module-attribute
PreprocessingId = Literal[
"binarize",
"clip",
"ensure_dtype",
"fixed_zero_mean_unit_variance",
"scale_linear",
"scale_range",
"sigmoid",
"softmax",
]
SpaceUnit
module-attribute
SpaceUnit = Literal[
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter",
]
Space unit compatible with the OME-Zarr axes specification 0.5
SpecificWeightsDescr
module-attribute
SpecificWeightsDescr = Union[
KerasHdf5WeightsDescr,
OnnxWeightsDescr,
PytorchStateDictWeightsDescr,
TensorflowJsWeightsDescr,
TensorflowSavedModelBundleWeightsDescr,
TorchscriptWeightsDescr,
]
TVs
module-attribute
TVs = Union[
NotEmpty[List[int]],
NotEmpty[List[float]],
NotEmpty[List[bool]],
NotEmpty[List[str]],
]
TensorDataDescr
module-attribute
TensorDataDescr = Union[
NominalOrOrdinalDataDescr, IntervalOrRatioDataDescr
]
TimeUnit
module-attribute
TimeUnit = Literal[
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond",
]
Time unit compatible with the OME-Zarr axes specification 0.5
VALID_COVER_IMAGE_EXTENSIONS
module-attribute
VALID_COVER_IMAGE_EXTENSIONS = (
".gif",
".jpeg",
".jpg",
".png",
".svg",
)
WeightsFormat
module-attribute
WeightsFormat = Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
]
API Reference
spec
get_resource_package_content
ArchitectureFromFileDescr
pydantic-model
Bases: _ArchitectureCallableDescr, FileDescr
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
}
Fields:

- sha256 (Optional[Sha256])
- callable (Annotated[Identifier, Field(examples=['MyNetworkClass', 'get_my_model'])])
- kwargs (Dict[str, YamlValue])
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])

Validators:

- _validate_sha256
callable
pydantic-field
callable: Annotated[
Identifier,
Field(examples=["MyNetworkClass", "get_my_model"]),
]
Identifier of the callable that returns a torch.nn.Module instance.
source
pydantic-field
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Architecture source file
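Per the JSON schema above, only `source` and `callable` are required; `sha256` and `kwargs` are optional. A sketch of the raw content this model validates (the file name and kwargs are invented for illustration):

```python
# Raw mapping matching the ArchitectureFromFileDescr JSON schema above.
arch = {
    "source": "model.py",          # architecture source file (URL or path)
    "sha256": "0" * 64,            # optional: SHA-256 of the source file (64 hex chars)
    "callable": "MyNetworkClass",  # identifier of the callable returning a torch.nn.Module
    "kwargs": {"in_channels": 1},  # keyword arguments for the callable
}
assert {"source", "callable"} <= arch.keys()
```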
download
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
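`validate_sha256` compares the stored hash against one recomputed from the source file. The `Sha256` type documented above is a 64-character hex digest, which the Python standard library produces directly:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # The Sha256 type in this spec is exactly such a 64-character hex digest.
    return hashlib.sha256(data).hexdigest()

digest = sha256_hex(b"example file content")
assert len(digest) == 64
```

In practice the digest would be computed over the file's bytes (e.g. read in chunks for large weights files) and compared to the `sha256` field.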
ArchitectureFromLibraryDescr
pydantic-model
Bases: _ArchitectureCallableDescr
Show JSON schema:
{
"$defs": {
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
}
Fields:

- callable (Annotated[Identifier, Field(examples=['MyNetworkClass', 'get_my_model'])])
- kwargs (Dict[str, YamlValue])
- import_from (str)
callable
pydantic-field
callable: Annotated[
Identifier,
Field(examples=["MyNetworkClass", "get_my_model"]),
]
Identifier of the callable that returns a torch.nn.Module instance.
import_from
pydantic-field
import_from: str
Where to import the callable from, i.e. from <import_from> import <callable>
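A sketch of the raw content for this model, spelling out the import statement the description stands for (the package and callable names are invented for illustration):

```python
# Raw mapping matching the ArchitectureFromLibraryDescr schema above.
arch = {
    "callable": "get_my_model",          # identifier of the callable
    "import_from": "my_package.models",  # module to import it from
    "kwargs": {"depth": 4},              # keyword arguments for the callable
}

# The import the description stands for:
stmt = f"from {arch['import_from']} import {arch['callable']}"
print(stmt)  # from my_package.models import get_my_model
```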
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Author
pydantic-model
Bases: _Author_v0_2
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
}
Fields:

- affiliation (Optional[str])
- email (Optional[EmailStr])
- orcid (Annotated[Optional[OrcidId], Field(examples=['0000-0001-2345-6789'])])
- name (Annotated[str, Predicate(_has_no_slash)])
- github_user (Optional[str])

Validators:

- _validate_github_user → github_user
orcid
pydantic-field
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
AxisBase
pydantic-model
Bases: NodeWithExplicitlySetFields
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"description": "An axis id unique across all axes of one tensor.",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.AxisBase",
"type": "object"
}
Fields:

- id (AxisId)
- description (Annotated[str, MaxLen(128)])
description
pydantic-field
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
AxisId
Bases: LowerCaseIdentifier
Inheritance: AxisId → LowerCaseIdentifier → ValidatedString
Methods:
| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
Attributes:
| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
LowerCaseIdentifierAnno,
MaxLen(16),
AfterValidator(_normalize_axis_id),
]
]
the pydantic root model to validate the string
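From the root model above, an `AxisId` is a lowercase identifier of at most 16 characters, normalized by `_normalize_axis_id` after validation. A rough stand-alone check of those constraints; the exact pattern of `LowerCaseIdentifierAnno` is an assumption here, not taken from the library:

```python
import re

def looks_like_axis_id(s: str) -> bool:
    # 1..16 characters, lowercase identifier; the identifier pattern
    # is assumed, only the MaxLen(16) bound comes from the docs above.
    return 1 <= len(s) <= 16 and re.fullmatch(r"[a-z][a-z0-9_]*", s) is not None

print(looks_like_axis_id("channel"))  # True
print(looks_like_axis_id("Channel"))  # False (not lowercase)
```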
__get_pydantic_core_schema__
classmethod
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
BadgeDescr
pydantic-model
Bases: Node
A custom badge
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
}
Fields:

- label (Annotated[str, Field(examples=['Open in Colab'])])
- icon (Annotated[Optional[Union[Annotated[Union[FilePath, RelativeFilePath], AfterValidator(wo_special_file_name), include_in_package], Union[HttpUrl, pydantic.HttpUrl]]], Field(examples=['https://colab.research.google.com/assets/colab-badge.svg'])])
- url (Annotated[HttpUrl, Field(examples=['https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb'])])
icon
pydantic-field
icon: Annotated[
Optional[
Union[
Annotated[
Union[FilePath, RelativeFilePath],
AfterValidator(wo_special_file_name),
include_in_package,
],
Union[HttpUrl, pydantic.HttpUrl],
]
],
Field(
examples=[
"https://colab.research.google.com/assets/colab-badge.svg"
]
),
] = None
badge icon (included in bioimage.io package if not a URL)
label
pydantic-field
label: Annotated[str, Field(examples=["Open in Colab"])]
badge label to display on hover
url
pydantic-field
url: Annotated[
HttpUrl,
Field(
examples=[
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
]
),
]
target URL
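Only `label` and `url` are required per the schema above. A sketch of the raw badge content, reusing the field examples given on this page:

```python
# Raw mapping matching the BadgeDescr schema above.
badge = {
    "label": "Open in Colab",
    "icon": "https://colab.research.google.com/assets/colab-badge.svg",
    "url": (
        "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/"
        "blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
    ),
}
assert {"label", "url"} <= badge.keys()
```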
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
BatchAxis
pydantic-model
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
}
Fields:

- description (Annotated[str, MaxLen(128)])
- type (Literal['batch'])
- id (Annotated[AxisId, Predicate(_is_batch)])
- size (Optional[Literal[1]])
description
pydantic-field
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
id
pydantic-field
id: Annotated[AxisId, Predicate(_is_batch)] = BATCH_AXIS_ID
An axis id unique across all axes of one tensor.
size
pydantic-field
size: Optional[Literal[1]] = None
The batch size may be fixed to 1, otherwise (the default) it may be chosen arbitrarily depending on available memory
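Two raw `BatchAxis` mappings per the schema above: omitting `size` (the default) leaves the batch size to be chosen at runtime, while `size: 1` pins it.

```python
# Raw mappings matching the BatchAxis schema above; only "type" is required.
free_batch = {"type": "batch"}               # batch size chosen at runtime
fixed_batch = {"type": "batch", "size": 1}   # batch size fixed to 1

assert free_batch.get("size") is None
assert fixed_batch["size"] == 1
```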
__pydantic_init_subclass__
classmethod
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
BiasRisksLimitations
pydantic-model
Bases: Node
Known biases, risks, technical limitations, and recommendations for model use.
Show JSON schema:
{
"additionalProperties": true,
"description": "Known biases, risks, technical limitations, and recommendations for model use.",
"properties": {
"known_biases": {
"default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
"description": "Biases in training data or model behavior.",
"title": "Known Biases",
"type": "string"
},
"risks": {
"default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
"description": "Potential risks in the context of bioimage analysis.",
"title": "Risks",
"type": "string"
},
"limitations": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Technical limitations and failure modes.",
"title": "Limitations"
},
"recommendations": {
"default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
"title": "Recommendations",
"type": "string"
}
},
"title": "model.v0_5.BiasRisksLimitations",
"type": "object"
}
Fields:

- known_biases (str)
- risks (str)
- limitations (Optional[str])
- recommendations (str)
limitations
pydantic-field
limitations: Optional[str] = None
Technical limitations and failure modes.
recommendations
pydantic-field
recommendations: str = "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model."
Mitigation strategies regarding known_biases, risks, and limitations, as well as applicable best practices.
Consider:

- How to use a validation dataset?
- How to manually validate?
- Feasibility of domain adaptation for different experimental setups?
format_md
format_md() -> str
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 45-88
BinarizeAlongAxisKwargs
pydantic-model
¤
Bases: KwargsNode
keyword arguments for BinarizeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
}
Fields:
- threshold (NotEmpty[List[float]])
- axis (Annotated[NonBatchAxisId, Field(examples=['channel'])])
axis
pydantic-field
¤
axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]
The threshold axis
threshold
pydantic-field
¤
threshold: NotEmpty[List[float]]
The fixed threshold values along axis
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 426-427
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 420-424
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 417-418
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 45-88
BinarizeDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Binarize the tensor with a fixed threshold.
Values above BinarizeKwargs.threshold/BinarizeAlongAxisKwargs.threshold will be set to one, values below the threshold to zero.
Examples:
- in YAML
  ```yaml
  postprocessing:
    - id: binarize
      kwargs:
        axis: 'channel'
        threshold: [0.25, 0.5, 0.75]
  ```
- in Python:
  >>> postprocessing = [BinarizeDescr(
  ...     kwargs=BinarizeAlongAxisKwargs(
  ...         axis=AxisId('channel'),
  ...         threshold=[0.25, 0.5, 0.75],
  ...     )
  ... )]
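The thresholding semantics described above can be sketched with NumPy. This is an illustrative re-implementation of the documented behavior, not the reference code; both function names are made up:

```python
import numpy as np

def binarize(tensor: np.ndarray, threshold: float) -> np.ndarray:
    # BinarizeKwargs semantics: values above `threshold` become one,
    # values below become zero.
    return (tensor > threshold).astype(tensor.dtype)

def binarize_along_axis(tensor: np.ndarray, thresholds, axis: int) -> np.ndarray:
    # BinarizeAlongAxisKwargs semantics: one threshold per entry along `axis`,
    # broadcast against all other axes.
    shape = [1] * tensor.ndim
    shape[axis] = len(thresholds)
    t = np.asarray(thresholds, dtype=tensor.dtype).reshape(shape)
    return (tensor > t).astype(tensor.dtype)

x = np.array([[0.1, 0.6], [0.4, 0.9]], dtype=np.float32)
y = binarize(x, threshold=0.5)                              # [[0, 1], [0, 1]]
z = binarize_along_axis(x, thresholds=[0.2, 0.8], axis=1)   # [[0, 0], [1, 1]]
```

Note that the spec text leaves behavior for values exactly equal to the threshold unspecified; the sketch maps them to zero.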
Show JSON schema:
{
"$defs": {
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
}
Fields:
- id (Literal['binarize'])
- kwargs (Union[BinarizeKwargs, BinarizeAlongAxisKwargs])
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59-75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 45-88
BinarizeKwargs
pydantic-model
¤
Bases: KwargsNode
keyword arguments for BinarizeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
}
Fields:
- threshold (float)
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 426-427
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 420-424
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 417-418
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 45-88
BioimageioConfig
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"BiasRisksLimitations": {
"additionalProperties": true,
"description": "Known biases, risks, technical limitations, and recommendations for model use.",
"properties": {
"known_biases": {
"default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
"description": "Biases in training data or model behavior.",
"title": "Known Biases",
"type": "string"
},
"risks": {
"default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
"description": "Potential risks in the context of bioimage analysis.",
"title": "Risks",
"type": "string"
},
"limitations": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Technical limitations and failure modes.",
"title": "Limitations"
},
"recommendations": {
"default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
"title": "Recommendations",
"type": "string"
}
},
"title": "model.v0_5.BiasRisksLimitations",
"type": "object"
},
"EnvironmentalImpact": {
"additionalProperties": true,
"description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"properties": {
"hardware_type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU/CPU specifications",
"title": "Hardware Type"
},
"hours_used": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total compute hours",
"title": "Hours Used"
},
"cloud_provider": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "If applicable",
"title": "Cloud Provider"
},
"compute_region": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Geographic location",
"title": "Compute Region"
},
"co2_emitted": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"title": "Co2 Emitted"
}
},
"title": "model.v0_5.EnvironmentalImpact",
"type": "object"
},
"Evaluation": {
"additionalProperties": true,
"properties": {
"model_id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model being evaluated.",
"title": "Model Id"
},
"dataset_id": {
"description": "Dataset used for evaluation.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"dataset_source": {
"description": "Source of the dataset.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
"dataset_role": {
"description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
"enum": [
"train",
"validation",
"test",
"independent",
"unknown"
],
"title": "Dataset Role",
"type": "string"
},
"sample_count": {
"description": "Number of evaluated samples.",
"title": "Sample Count",
"type": "integer"
},
"evaluation_factors": {
"description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Evaluation Factors",
"type": "array"
},
"evaluation_factors_long": {
"description": "Descriptions (long form) of each evaluation factor.",
"items": {
"type": "string"
},
"title": "Evaluation Factors Long",
"type": "array"
},
"metrics": {
"description": "(Abbreviations of) metrics used for evaluation.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Metrics",
"type": "array"
},
"metrics_long": {
"description": "Description of each metric used.",
"items": {
"type": "string"
},
"title": "Metrics Long",
"type": "array"
},
"results": {
"description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
"items": {
"items": {
"anyOf": [
{
"type": "string"
},
{
"type": "number"
},
{
"type": "integer"
}
]
},
"type": "array"
},
"title": "Results",
"type": "array"
},
"results_summary": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Interpretation of results for general audience.\n\nConsider:\n - Overall model performance\n - Comparison to existing methods\n - Limitations and areas for improvement",
"title": "Results Summary"
}
},
"required": [
"dataset_id",
"dataset_source",
"dataset_role",
"sample_count",
"evaluation_factors",
"evaluation_factors_long",
"metrics",
"metrics_long",
"results"
],
"title": "model.v0_5.Evaluation",
"type": "object"
},
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
},
"TrainingDetails": {
"additionalProperties": true,
"properties": {
"training_preprocessing": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
"title": "Training Preprocessing"
},
"training_epochs": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Number of training epochs.",
"title": "Training Epochs"
},
"training_batch_size": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Batch size used in training.",
"title": "Training Batch Size"
},
"initial_learning_rate": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Initial learning rate used in training.",
"title": "Initial Learning Rate"
},
"learning_rate_schedule": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Learning rate schedule used in training.",
"title": "Learning Rate Schedule"
},
"loss_function": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Loss function used in training, e.g. nn.MSELoss.",
"title": "Loss Function"
},
"loss_function_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `loss_function`",
"title": "Loss Function Kwargs",
"type": "object"
},
"optimizer": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "optimizer, e.g. torch.optim.Adam",
"title": "Optimizer"
},
"optimizer_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `optimizer`",
"title": "Optimizer Kwargs",
"type": "object"
},
"regularization": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
"title": "Regularization"
},
"training_duration": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total training duration in hours.",
"title": "Training Duration"
}
},
"title": "model.v0_5.TrainingDetails",
"type": "object"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
},
"funded_by": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Funding agency, grant number if applicable",
"title": "Funded By"
},
"architecture_type": {
"anyOf": [
{
"maxLength": 32,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
"title": "Architecture Type"
},
"architecture_description": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Text description of model architecture.",
"title": "Architecture Description"
},
"modality": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
"title": "Modality"
},
"target_structure": {
"description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
"items": {
"type": "string"
},
"title": "Target Structure",
"type": "array"
},
"task": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
"title": "Task"
},
"new_version": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A new version of this model exists with a different model id.",
"title": "New Version"
},
"out_of_scope_use": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
"title": "Out Of Scope Use"
},
"bias_risks_limitations": {
"$ref": "#/$defs/BiasRisksLimitations",
"description": "Description of known bias, risks, and technical limitations for in-scope model use."
},
"model_parameter_count": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Total number of model parameters.",
"title": "Model Parameter Count"
},
"training": {
"$ref": "#/$defs/TrainingDetails",
"description": "Details on how the model was trained."
},
"inference_time": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
"title": "Inference Time"
},
"memory_requirements_inference": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
"title": "Memory Requirements Inference"
},
"memory_requirements_training": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
"title": "Memory Requirements Training"
},
"evaluations": {
"description": "Quantitative model evaluations.\n\nNote:\n At the moment we recommend to include only a single test dataset\n (with evaluation factors that may mark subsets of the dataset)\n to avoid confusion and make the presentation of results cleaner.",
"items": {
"$ref": "#/$defs/Evaluation"
},
"title": "Evaluations",
"type": "array"
},
"environmental_impact": {
"$ref": "#/$defs/EnvironmentalImpact",
"description": "Environmental considerations for model training and deployment"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
}
Fields:
- reproducibility_tolerance (Sequence[ReproducibilityTolerance])
- funded_by (Optional[str])
- architecture_type (Optional[Annotated[str, MaxLen(32)]])
- architecture_description (Optional[str])
- modality (Optional[str])
- target_structure (List[str])
- task (Optional[str])
- new_version (Optional[ModelId])
- out_of_scope_use (Optional[str])
- bias_risks_limitations (BiasRisksLimitations)
- model_parameter_count (Optional[int])
- training (TrainingDetails)
- inference_time (Optional[str])
- memory_requirements_inference (Optional[str])
- memory_requirements_training (Optional[str])
- evaluations (List[Evaluation])
- environmental_impact (EnvironmentalImpact)
architecture_description
pydantic-field
¤
architecture_description: Optional[str] = None
Text description of model architecture.
architecture_type
pydantic-field
¤
architecture_type: Optional[Annotated[str, MaxLen(32)]] = (
None
)
Model architecture type, e.g., 3D U-Net, ResNet, transformer
bias_risks_limitations
pydantic-field
¤
bias_risks_limitations: BiasRisksLimitations
Description of known bias, risks, and technical limitations for in-scope model use.
environmental_impact
pydantic-field
¤
environmental_impact: EnvironmentalImpact
Environmental considerations for model training and deployment
evaluations
pydantic-field
¤
evaluations: List[Evaluation]
Quantitative model evaluations.
Note
At the moment we recommend including only a single test dataset (with evaluation factors that may mark subsets of the dataset) to avoid confusion and keep the presentation of results clean.
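As the Evaluation schema above describes, the results field is a table whose rows follow metrics and whose columns follow evaluation_factors. A small illustration with made-up metric names and values:

```python
# Illustrative only: arranging an Evaluation's `results` (rows = metrics,
# columns = evaluation factors) into a nested lookup. All values are made up.
metrics = ["IoU", "F1"]
evaluation_factors = ["low SNR", "high density", "overall"]
results = [
    [0.81, 0.76, 0.79],  # IoU per evaluation factor
    [0.88, 0.83, 0.86],  # F1 per evaluation factor
]
lookup = {
    metric: dict(zip(evaluation_factors, row))
    for metric, row in zip(metrics, results)
}
```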
funded_by
pydantic-field
¤
funded_by: Optional[str] = None
Funding agency, grant number if applicable
inference_time
pydantic-field
¤
inference_time: Optional[str] = None
Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.
memory_requirements_inference
pydantic-field
¤
memory_requirements_inference: Optional[str] = None
GPU memory needed for inference. Multiple examples with different image size can be given.
memory_requirements_training
pydantic-field
¤
memory_requirements_training: Optional[str] = None
GPU memory needed for training. Multiple examples with different image/batch sizes can be given.
modality
pydantic-field
¤
modality: Optional[str] = None
Input modality, e.g., fluorescence microscopy, electron microscopy
model_parameter_count
pydantic-field
¤
model_parameter_count: Optional[int] = None
Total number of model parameters.
new_version
pydantic-field
¤
new_version: Optional[ModelId] = None
A new version of this model exists with a different model id.
out_of_scope_use
pydantic-field
¤
out_of_scope_use: Optional[str] = None
Describe how the model may be misused in bioimage analysis contexts and what users should not do with the model.
reproducibility_tolerance
pydantic-field
¤
reproducibility_tolerance: Sequence[
ReproducibilityTolerance
] = ()
Tolerances to allow when reproducing the model's test outputs from the model's test inputs. Only the first entry matching tensor id and weights format is considered.
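The ReproducibilityTolerance schema above defines a mismatch criterion mirroring numpy.testing.assert_allclose: an element mismatches if abs(output - test_tensor) > absolute_tolerance + relative_tolerance * abs(test_tensor). A sketch of that check, with defaults taken from the schema and made-up example tensors:

```python
import numpy as np

# Sketch of the ReproducibilityTolerance mismatch criterion; the function
# name and example tensors are illustrative, not part of bioimageio.spec.
def mismatched_per_million(output, test_tensor,
                           absolute_tolerance=1e-3, relative_tolerance=1e-3):
    output = np.asarray(output)
    test_tensor = np.asarray(test_tensor)
    mismatched = np.abs(output - test_tensor) > (
        absolute_tolerance + relative_tolerance * np.abs(test_tensor)
    )
    # scale the mismatch count to elements per million, as in the schema
    return 1_000_000 * int(mismatched.sum()) // mismatched.size

ref = np.array([1.0, 2.0, 3.0, 4.0])
out = ref + np.array([0.0, 0.002, 0.1, 0.0])
ppm = mismatched_per_million(out, ref)  # only the 0.1 deviation mismatches
```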
target_structure
pydantic-field
¤
target_structure: List[str]
Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells
task
pydantic-field
¤
task: Optional[str] = None
Bioimage-specific task type, e.g., segmentation, classification, detection, denoising
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 45-88
CallableFromDepencency
¤
Bases: ValidatedStringWithInnerNode[CallableFromDepencencyNode]
Class hierarchy: ValidatedString → ValidatedStringWithInnerNode → CallableFromDepencency
Methods:
| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
Attributes:
| Name | Type | Description |
|---|---|---|
| callable_name | | The callable Python identifier implemented in module module_name. |
| module_name | | The Python module that implements callable_name. |
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
callable_name
property
¤
callable_name
The callable Python identifier implemented in module module_name.
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
str,
StringConstraints(
strip_whitespace=True, pattern="^.+\\..+$"
),
]
]
the pydantic root model to validate the string
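The root_model constraint above strips whitespace and requires at least one dot separating a module path from a callable name. A sketch of that validation and split; the helper name, splitting on the last dot, and the example string are assumptions for illustration:

```python
import re

# Pattern from the StringConstraints shown above: at least one dot with
# non-empty text on both sides.
PATTERN = re.compile(r"^.+\..+$")

def split_callable(spec: str):
    spec = spec.strip()  # the constraint also strips whitespace
    if not PATTERN.match(spec):
        raise ValueError(f"expected 'module.callable', got {spec!r}")
    # splitting on the last dot is an assumption, not confirmed by the docs
    module_name, _, callable_name = spec.rpartition(".")
    return module_name, callable_name

parts = split_callable("my_package.models.UNet")  # ('my_package.models', 'UNet')
```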
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 29-33
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 35-44
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 19-23
ChannelAxis
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
}
Fields:
- description (Annotated[str, MaxLen(128)])
- type (Literal['channel'])
- id (NonBatchAxisId)
- channel_names (NotEmpty[List[Identifier]])
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
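The `model_validate` call accepts a plain mapping with the fields from the JSON schema above. As a self-contained sketch (using a simplified, hypothetical `ChannelAxisSketch` stand-in rather than the real `model.v0_5.ChannelAxis` class):

```python
from typing import List, Literal

from pydantic import BaseModel


class ChannelAxisSketch(BaseModel):
    # Hypothetical, simplified stand-in for model.v0_5.ChannelAxis,
    # mirroring the required fields of the JSON schema above.
    type: Literal["channel"]
    id: str = "channel"
    description: str = ""
    channel_names: List[str]


axis = ChannelAxisSketch.model_validate(
    {"type": "channel", "channel_names": ["DAPI", "GFP"]}
)
print(axis.id, axis.channel_names)
```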
CiteEntry
pydantic-model
¤
Bases: Node
A citation that should be referenced in work using this resource.
Show JSON schema:
{
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
}
Fields:
Validators:
- _check_doi_or_url
doi
pydantic-field
¤
doi: Optional[Doi] = None
A digital object identifier (DOI) is the preferred citation reference. See https://www.doi.org/ for details. Note: Either doi or url must be specified.
url
pydantic-field
¤
url: Optional[HttpUrl] = None
URL to cite (preferably specify a doi instead or in addition). Note: Either doi or url must be specified.
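The `_check_doi_or_url` validator listed above enforces that at least one of `doi` and `url` is given. A minimal sketch of that rule with a pydantic `model_validator` (the `CiteEntrySketch` class is a hypothetical, simplified stand-in, not the real `CiteEntry`):

```python
from typing import Optional

from pydantic import BaseModel, ValidationError, model_validator


class CiteEntrySketch(BaseModel):
    # Hypothetical stand-in for CiteEntry illustrating the
    # either-doi-or-url rule enforced by _check_doi_or_url.
    text: str
    doi: Optional[str] = None
    url: Optional[str] = None

    @model_validator(mode="after")
    def _check_doi_or_url(self):
        if self.doi is None and self.url is None:
            raise ValueError("Either doi or url must be specified.")
        return self


CiteEntrySketch(text="Ronneberger et al. 2015", doi="10.1007/978-3-319-24574-4_28")
try:
    CiteEntrySketch(text="missing reference")
except ValidationError:
    print("rejected: neither doi nor url given")
```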
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ClipDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Set tensor values below min to min and above max to max.
See ScaleRangeDescr for examples.
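The clipping semantics described by `ClipKwargs` below can be sketched in a few lines of numpy. This is a minimal illustration, not the library's implementation: the function name is an assumption, and `axes` is taken as a tuple of integer positions here, whereas the spec identifies axes by `AxisId`.

```python
import numpy as np


def clip(tensor, min=None, max=None, min_percentile=None,
         max_percentile=None, axes=None):
    # Percentile bounds are computed jointly over `axes`
    # (all axes if None), then applied element-wise.
    if min_percentile is not None:
        min = np.percentile(tensor, min_percentile, axis=axes, keepdims=True)
    if max_percentile is not None:
        max = np.percentile(tensor, max_percentile, axis=axes, keepdims=True)
    return np.clip(tensor, min, max)


data = np.arange(10, dtype=np.float32)
print(clip(data, min=2, max=7))  # values limited to [2, 7]
print(clip(data, min_percentile=10, max_percentile=90))
```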
Show JSON schema:
{
"$defs": {
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ClipDescr][]",
"properties": {
"min": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
"title": "Min"
},
"min_percentile": {
"anyOf": [
{
"exclusiveMaximum": 100,
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
"title": "Min Percentile"
},
"max": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
"title": "Max"
},
"max_percentile": {
"anyOf": [
{
"exclusiveMinimum": 1,
"maximum": 100,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
"title": "Max Percentile"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
}
},
"title": "model.v0_5.ClipKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
}
Fields:
- id (Literal['clip'])
- kwargs (ClipKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ClipKwargs
pydantic-model
¤
Bases: KwargsNode
key word arguments for ClipDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [ClipDescr][]",
"properties": {
"min": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
"title": "Min"
},
"min_percentile": {
"anyOf": [
{
"exclusiveMaximum": 100,
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
"title": "Min Percentile"
},
"max": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
"title": "Max"
},
"max_percentile": {
"anyOf": [
{
"exclusiveMinimum": 1,
"maximum": 100,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
"title": "Max Percentile"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
}
},
"title": "model.v0_5.ClipKwargs",
"type": "object"
}
Fields:
- min (Optional[float])
- min_percentile (Optional[Annotated[float, Interval(ge=0, lt=100)]])
- max (Optional[float])
- max_percentile (Optional[Annotated[float, Interval(gt=1, le=100)]])
- axes (Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])])
Validators:
- _validate
axes
pydantic-field
¤
axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None
The subset of axes used to determine percentiles jointly, i.e. the axes to reduce over when computing min/max from min_percentile/max_percentile.
For example, to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x'), resulting in a tensor of equal shape with values clipped per channel, specify axes=('batch', 'x', 'y').
To clip samples independently, leave out the 'batch' axis.
Only valid if min_percentile and/or max_percentile are set.
Default: compute percentiles over all axes jointly.
max
pydantic-field
¤
max: Optional[float] = None
Maximum value for clipping.
Exclusive with max_percentile.
max_percentile
pydantic-field
¤
max_percentile: Optional[
Annotated[float, Interval(gt=1, le=100)]
] = None
Maximum percentile for clipping.
Exclusive with max.
In range (1, 100].
min
pydantic-field
¤
min: Optional[float] = None
Minimum value for clipping.
Exclusive with min_percentile
min_percentile
pydantic-field
¤
min_percentile: Optional[
Annotated[float, Interval(ge=0, lt=100)]
] = None
Minimum percentile for clipping.
Exclusive with min.
In range [0, 100).
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Config
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"BiasRisksLimitations": {
"additionalProperties": true,
"description": "Known biases, risks, technical limitations, and recommendations for model use.",
"properties": {
"known_biases": {
"default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
"description": "Biases in training data or model behavior.",
"title": "Known Biases",
"type": "string"
},
"risks": {
"default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
"description": "Potential risks in the context of bioimage analysis.",
"title": "Risks",
"type": "string"
},
"limitations": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Technical limitations and failure modes.",
"title": "Limitations"
},
"recommendations": {
"default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
"title": "Recommendations",
"type": "string"
}
},
"title": "model.v0_5.BiasRisksLimitations",
"type": "object"
},
"BioimageioConfig": {
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
},
"funded_by": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Funding agency, grant number if applicable",
"title": "Funded By"
},
"architecture_type": {
"anyOf": [
{
"maxLength": 32,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
"title": "Architecture Type"
},
"architecture_description": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Text description of model architecture.",
"title": "Architecture Description"
},
"modality": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
"title": "Modality"
},
"target_structure": {
"description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
"items": {
"type": "string"
},
"title": "Target Structure",
"type": "array"
},
"task": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
"title": "Task"
},
"new_version": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A new version of this model exists with a different model id.",
"title": "New Version"
},
"out_of_scope_use": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
"title": "Out Of Scope Use"
},
"bias_risks_limitations": {
"$ref": "#/$defs/BiasRisksLimitations",
"description": "Description of known bias, risks, and technical limitations for in-scope model use."
},
"model_parameter_count": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Total number of model parameters.",
"title": "Model Parameter Count"
},
"training": {
"$ref": "#/$defs/TrainingDetails",
"description": "Details on how the model was trained."
},
"inference_time": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
"title": "Inference Time"
},
"memory_requirements_inference": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
"title": "Memory Requirements Inference"
},
"memory_requirements_training": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
"title": "Memory Requirements Training"
},
"evaluations": {
"description": "Quantitative model evaluations.\n\nNote:\n At the moment we recommend to include only a single test dataset\n (with evaluation factors that may mark subsets of the dataset)\n to avoid confusion and make the presentation of results cleaner.",
"items": {
"$ref": "#/$defs/Evaluation"
},
"title": "Evaluations",
"type": "array"
},
"environmental_impact": {
"$ref": "#/$defs/EnvironmentalImpact",
"description": "Environmental considerations for model training and deployment"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
},
"EnvironmentalImpact": {
"additionalProperties": true,
"description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"properties": {
"hardware_type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU/CPU specifications",
"title": "Hardware Type"
},
"hours_used": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total compute hours",
"title": "Hours Used"
},
"cloud_provider": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "If applicable",
"title": "Cloud Provider"
},
"compute_region": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Geographic location",
"title": "Compute Region"
},
"co2_emitted": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"title": "Co2 Emitted"
}
},
"title": "model.v0_5.EnvironmentalImpact",
"type": "object"
},
"Evaluation": {
"additionalProperties": true,
"properties": {
"model_id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model being evaluated.",
"title": "Model Id"
},
"dataset_id": {
"description": "Dataset used for evaluation.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"dataset_source": {
"description": "Source of the dataset.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
"dataset_role": {
"description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
"enum": [
"train",
"validation",
"test",
"independent",
"unknown"
],
"title": "Dataset Role",
"type": "string"
},
"sample_count": {
"description": "Number of evaluated samples.",
"title": "Sample Count",
"type": "integer"
},
"evaluation_factors": {
"description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Evaluation Factors",
"type": "array"
},
"evaluation_factors_long": {
"description": "Descriptions (long form) of each evaluation factor.",
"items": {
"type": "string"
},
"title": "Evaluation Factors Long",
"type": "array"
},
"metrics": {
"description": "(Abbreviations of) metrics used for evaluation.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Metrics",
"type": "array"
},
"metrics_long": {
"description": "Description of each metric used.",
"items": {
"type": "string"
},
"title": "Metrics Long",
"type": "array"
},
"results": {
"description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
"items": {
"items": {
"anyOf": [
{
"type": "string"
},
{
"type": "number"
},
{
"type": "integer"
}
]
},
"type": "array"
},
"title": "Results",
"type": "array"
},
"results_summary": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Interpretation of results for general audience.\n\nConsider:\n - Overall model performance\n - Comparison to existing methods\n - Limitations and areas for improvement",
"title": "Results Summary"
}
},
"required": [
"dataset_id",
"dataset_source",
"dataset_role",
"sample_count",
"evaluation_factors",
"evaluation_factors_long",
"metrics",
"metrics_long",
"results"
],
"title": "model.v0_5.Evaluation",
"type": "object"
},
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
},
"TrainingDetails": {
"additionalProperties": true,
"properties": {
"training_preprocessing": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
"title": "Training Preprocessing"
},
"training_epochs": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Number of training epochs.",
"title": "Training Epochs"
},
"training_batch_size": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Batch size used in training.",
"title": "Training Batch Size"
},
"initial_learning_rate": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Initial learning rate used in training.",
"title": "Initial Learning Rate"
},
"learning_rate_schedule": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Learning rate schedule used in training.",
"title": "Learning Rate Schedule"
},
"loss_function": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Loss function used in training, e.g. nn.MSELoss.",
"title": "Loss Function"
},
"loss_function_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `loss_function`",
"title": "Loss Function Kwargs",
"type": "object"
},
"optimizer": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "optimizer, e.g. torch.optim.Adam",
"title": "Optimizer"
},
"optimizer_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `optimizer`",
"title": "Optimizer Kwargs",
"type": "object"
},
"regularization": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
"title": "Regularization"
},
"training_duration": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total training duration in hours.",
"title": "Training Duration"
}
},
"title": "model.v0_5.TrainingDetails",
"type": "object"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": true,
"properties": {
"bioimageio": {
"$ref": "#/$defs/BioimageioConfig"
}
},
"title": "model.v0_5.Config",
"type": "object"
}
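The `ReproducibilityTolerance` entries in the schema above define when a reproduced output counts as matching a test tensor: an element mismatches if abs(output - test) > absolute_tolerance + relative_tolerance * abs(test) (the criterion of numpy.testing.assert_allclose), and at most `mismatched_elements_per_million` such elements are tolerated. A minimal numpy sketch of that check (the `within_tolerance` function name is an assumption):

```python
import numpy as np


def within_tolerance(output, test_tensor, relative_tolerance=1e-3,
                     absolute_tolerance=1e-3,
                     mismatched_elements_per_million=100):
    # An element mismatches if |output - test| > atol + rtol * |test|.
    mismatched = np.abs(output - test_tensor) > (
        absolute_tolerance + relative_tolerance * np.abs(test_tensor)
    )
    # Tolerate at most the configured number of mismatches per million.
    return mismatched.sum() / mismatched.size * 1_000_000 <= (
        mismatched_elements_per_million
    )


test = np.linspace(0.0, 1.0, 1000)
print(within_tolerance(test + 1e-4, test))  # tiny deviation: tolerated
print(within_tolerance(test + 1.0, test))   # every element off by 1: rejected
```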
Fields:
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
DataDependentSize
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
}
Fields:
Validators:
- _validate_max_gt_min
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_size
¤
validate_size(size: int) -> int
Source code in src/bioimageio/spec/model/v0_5.py
DatasetDescr
pydantic-model
¤
Bases: GenericDescrBase
A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"BioimageioConfig": {
"additionalProperties": true,
"description": "bioimage.io internal metadata.",
"properties": {},
"title": "generic.v0_3.BioimageioConfig",
"type": "object"
},
"CiteEntry": {
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
},
"Config": {
"additionalProperties": true,
"description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
"properties": {
"bioimageio": {
"$ref": "#/$defs/BioimageioConfig"
}
},
"title": "generic.v0_3.Config",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of this resource description and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"3D-Slicer-1.0",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMD-newlib",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"any-OSI",
"any-OSI-perl-modules",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"Artistic-dist",
"Aspell-RU",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Boehm-GC-without-fee",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-first-lines",
"BSD-2-Clause-Patent",
"BSD-2-Clause-pkgconf-disclaimer",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"Catharon",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC-PDM-1.0",
"CC-SA-1.0",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CryptoSwift",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"cve-tou",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"DocBook-DTD",
"DocBook-Schema",
"DocBook-Stylesheet",
"DocBook-XML",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRSD",
"FSFULLRWD",
"FSL-1.1-ALv2",
"FSL-1.1-MIT",
"FTL",
"Furuseth",
"fwlw",
"Game-Programming-Gems",
"GCR-docs",
"GD",
"generic-xts",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"Gutmann",
"HaskellReport",
"HDF5",
"hdparm",
"HIDAPI",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-acknowledgement",
"HPND-export-US-modify",
"HPND-export2-US",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Intel",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-merchantability-variant",
"HPND-MIT-disclaimer",
"HPND-Netrek",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-sell-variant-MIT-disclaimer-rev",
"HPND-UC",
"HPND-UC-export-US",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"InnoSetup",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"jove",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-1.6.35",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"man2html",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MIPS",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-Click",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Khronos-old",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCBI-PD",
"NCGL-UK-2.0",
"NCL",
"NCSA",
"NetCDF",
"Newsletr",
"NGPL",
"ngrep",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTIA-PD",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OAR",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"pkgconf",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PPL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"Ruby-pty",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"Sendmail-Open-Source-1.1",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMAIL-GPL",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"SOFA",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"SUL-1.0",
"Sun-PPP",
"Sun-PPP-2000",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"ThirdEye",
"threeparttable",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TrustedQSL",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"Ubuntu-font-1.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"Unlicense-libtelnet",
"Unlicense-libwhirlpool",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"wwl",
"X11",
"X11-distribute-modifications-variant",
"X11-swapped",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"xzoom",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Net-SNMP",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.3.0",
"description": "The **format** version of this resource specification",
"title": "Format Version",
"type": "string"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"title": "Documentation"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"config": {
"$ref": "#/$defs/Config",
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n giraffe_neckometer: # here is the domain name\n length: 3837283\n address:\n home: zoo\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"parent": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The description from which this one is derived",
"title": "Parent"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
      "description": "URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"format_version",
"type"
],
"title": "dataset 0.3.0",
"type": "object"
}
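Per the schema above, only name, format_version, and type are required at the top level, and each CiteEntry must provide either **doi** or **url**. A minimal stand-alone sketch of those two rules in plain Python (not the actual bioimageio.spec validator; the sample values are hypothetical):

```python
def check_minimal_dataset(rdf: dict) -> list:
    """Collect violations of a few rules from the dataset JSON schema."""
    errors = []
    # `name`, `format_version` and `type` are the only required top-level fields.
    for key in ("name", "format_version", "type"):
        if key not in rdf:
            errors.append(f"missing required field: {key}")
    if "type" in rdf and rdf["type"] != "dataset":
        errors.append("type must be 'dataset'")
    # CiteEntry: either **doi** or **url** has to be specified.
    for entry in rdf.get("cite", []):
        if not entry.get("doi") and not entry.get("url"):
            errors.append("cite entry needs a doi or a url")
    return errors

rdf = {
    "name": "nuclei-dsb2018",
    "format_version": "0.3.0",
    "type": "dataset",
    "cite": [{"text": "example citation"}],
}
print(check_minimal_dataset(rdf))  # ['cite entry needs a doi or a url']
```

For real validation use the class itself (e.g. `DatasetDescr.load`), which checks the full schema rather than this illustrative subset.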
Fields:
- _validation_summary (Optional[ValidationSummary])
- name (Annotated[Annotated[str, RestrictCharacters(string.ascii_letters + string.digits + '_+- ()')], MinLen(5), MaxLen(128), warn(MaxLen(64), 'Name longer than 64 characters.', INFO)])
- description (FAIR[Annotated[str, MaxLen(1024), warn(MaxLen(512), 'Description longer than 512 characters.')]])
- covers (List[FileSource_cover])
- id_emoji (Optional[Annotated[str, Len(min_length=1, max_length=2), Field(examples=['🦈', '🦥'])]])
- authors (FAIR[List[Author]])
- attachments (List[FileDescr_])
- cite (FAIR[List[CiteEntry]])
- license (FAIR[Annotated[Annotated[Union[LicenseId, DeprecatedLicenseId, None], Field(union_mode='left_to_right')], warn(Optional[LicenseId], '{value} is deprecated, see https://spdx.org/licenses/{value}.html'), Field(examples=['CC0-1.0', 'MIT', 'BSD-2-Clause'])]])
- git_repo (Annotated[Optional[HttpUrl], Field(examples=['https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad'])])
- icon (Union[Annotated[str, Len(min_length=1, max_length=2)], FileSource_, None])
- links (Annotated[List[str], Field(examples=[('ilastik/ilastik', 'deepimagej/deepimagej', 'zero/notebook_u-net_3d_zerocostdl4mic')])])
- uploader (Optional[Uploader])
- maintainers (List[Maintainer])
- tags (FAIR[Annotated[List[str], Field(examples=[('unet2d', 'pytorch', 'nucleus', 'segmentation', 'dsb2018')])]])
- version (Optional[Version])
- version_comment (Optional[Annotated[str, MaxLen(512)]])
- format_version (Literal['0.3.0'])
- documentation (FAIR[Optional[FileSource_documentation]])
- badges (List[BadgeDescr])
- config (Config)
- type (Literal['dataset'])
- id (Optional[DatasetId])
- parent (Optional[DatasetId])
- source (FAIR[Optional[HttpUrl]])
Validators:
- _check_maintainers_exist
- warn_about_tag_categories → tags
- _remove_version_number
- _convert_from_older_format
- _convert
authors
pydantic-field
¤
authors: FAIR[List[Author]]
The authors are the creators of this resource description and the primary points of contact.
config
pydantic-field
¤
config: Config
A field for custom configuration that can contain any keys not present in the RDF spec.
This means you should not store, for example, a GitHub repo URL in config since there is a git_repo field.
Keys in config may be very specific to a tool or consumer software. To avoid conflicting definitions,
it is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,
for example:
config:
giraffe_neckometer: # here is the domain name
length: 3837283
address:
home: zoo
imagej: # config specific to ImageJ
macro_dir: path/to/macro/file
If possible, please use snake_case for keys in config.
You may want to list linked files additionally under attachments to include them when packaging a resource.
(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains
an altered rdf.yaml file with local references to the downloaded files.)
description
pydantic-field
¤
description: FAIR[
Annotated[
str,
MaxLen(1024),
warn(
MaxLen(512),
"Description longer than 512 characters.",
),
]
] = ""
A string containing a brief description.
documentation
pydantic-field
¤
documentation: FAIR[Optional[FileSource_documentation]] = (
None
)
URL or relative path to a markdown file encoded in UTF-8 with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
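Because documentation may be a URL or a relative path, the mandatory .md suffix check only concerns the path component. A small sketch (has_md_suffix is a hypothetical helper, not part of bioimageio.spec):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def has_md_suffix(source: str) -> bool:
    # Works for both URLs and relative paths: urlparse leaves a
    # plain relative path untouched in its `path` component.
    return PurePosixPath(urlparse(source).path).suffix == ".md"

print(has_md_suffix("README.md"))                            # True
print(has_md_suffix("https://example.com/docs/README.md"))   # True
print(has_md_suffix("notes.txt"))                            # False
```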
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Annotated[
Optional[HttpUrl],
Field(
examples=[
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
]
),
] = None
A URL to the Git repository where the resource is being developed.
icon
pydantic-field
¤
icon: Union[
Annotated[str, Len(min_length=1, max_length=2)],
FileSource_,
None,
] = None
An icon for illustration, e.g. on bioimage.io
id
pydantic-field
¤
id: Optional[DatasetId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
id_emoji
pydantic-field
¤
id_emoji: Optional[
Annotated[
str,
Len(min_length=1, max_length=2),
Field(examples=["🦈", "🦥"]),
]
] = None
UTF-8 emoji for display alongside the id.
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
license
pydantic-field
¤
license: FAIR[
Annotated[
Annotated[
Union[LicenseId, DeprecatedLicenseId, None],
Field(union_mode="left_to_right"),
],
warn(
Optional[LicenseId],
"{value} is deprecated, see https://spdx.org/licenses/{value}.html",
),
Field(examples=["CC0-1.0", "MIT", "BSD-2-Clause"]),
]
] = None
A SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need one, please open a GitHub issue to discuss your intentions with the community.
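The warn annotation above turns a deprecated SPDX id into exactly the message shown in the field type. A stand-alone sketch using a hypothetical subset of the DeprecatedLicenseId enum from the schema:

```python
# Hypothetical subset of the DeprecatedLicenseId enum listed in the schema.
DEPRECATED_LICENSE_IDS = {"AGPL-1.0", "AGPL-3.0", "GPL-2.0", "GPL-3.0", "LGPL-2.1"}

def license_warning(value: str):
    """Return the deprecation warning for a deprecated SPDX id, else None."""
    if value in DEPRECATED_LICENSE_IDS:
        return f"{value} is deprecated, see https://spdx.org/licenses/{value}.html"
    return None

print(license_warning("GPL-3.0"))
# GPL-3.0 is deprecated, see https://spdx.org/licenses/GPL-3.0.html
print(license_warning("MIT"))  # None
```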
links
pydantic-field
¤
links: Annotated[
List[str],
Field(
examples=[
(
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic",
)
]
),
]
IDs of other bioimage.io resources
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers and at least some of them have to specify their github_user name
name
pydantic-field
¤
name: Annotated[
Annotated[
str,
RestrictCharacters(
string.ascii_letters + string.digits + "_+- ()"
),
],
MinLen(5),
MaxLen(128),
warn(
MaxLen(64), "Name longer than 64 characters.", INFO
),
]
A human-friendly name of the resource description. May only contain letters, digits, underscore, minus, parentheses and spaces.
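The annotation above combines a character whitelist with length bounds and a soft 64-character warning. An equivalent plain-Python check (check_name is a hypothetical helper mirroring those constraints, not bioimageio.spec API):

```python
import string

# Same whitelist as RestrictCharacters in the field annotation.
ALLOWED_NAME_CHARS = set(string.ascii_letters + string.digits + "_+- ()")

def check_name(name: str) -> list:
    """Mirror the name constraints: charset, MinLen(5), MaxLen(128), warn at 64."""
    issues = []
    if not 5 <= len(name) <= 128:
        issues.append("name must be 5 to 128 characters long")
    bad = set(name) - ALLOWED_NAME_CHARS
    if bad:
        issues.append(f"disallowed characters: {sorted(bad)}")
    if 64 < len(name) <= 128:
        issues.append("INFO: Name longer than 64 characters.")
    return issues

print(check_name("2D UNet (DSB 2018 nuclei)"))  # []
```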
parent
pydantic-field
¤
parent: Optional[DatasetId] = None
The description from which this one is derived
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
tags
pydantic-field
¤
tags: FAIR[
Annotated[
List[str],
Field(
examples=[
(
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018",
)
]
),
]
]
Associated tags
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
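Version accepts strings, integers and numbers and wraps a packaging.version.Version for validation. As an illustration only, here is a core SemVer 2.0 check (pre-release and build metadata deliberately omitted for brevity; this is an assumption, not what bioimageio.spec itself does):

```python
import re

# MAJOR.MINOR.PATCH with no leading zeros; pre-release/build parts omitted.
SEMVER_CORE = re.compile(r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$")

def is_core_semver(version) -> bool:
    return bool(SEMVER_CORE.match(str(version)))

print(is_core_semver("0.3.0"))   # True
print(is_core_semver("1.0"))     # False (needs three parts)
print(is_core_semver("01.0.0"))  # False (leading zero)
```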
version_comment
pydantic-field
¤
version_comment: Optional[Annotated[str, MaxLen(512)]] = (
None
)
A comment on the version of the resource.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py (lines 200-212)
convert_from_old_format_wo_validation
classmethod
¤
convert_from_old_format_wo_validation(
data: BioimageioYamlContent,
) -> None
Convert metadata following an older format version to this class's format without validating the result.
Source code in src/bioimageio/spec/generic/v0_3.py (lines 449-454)
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py (lines 378-393)
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py (lines 214-254)
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dest | Optional[Union[ZipFile, IO[bytes], Path, str]] | (path/bytes stream of) destination zipfile | None |
Source code in src/bioimageio/spec/_internal/common_nodes.py
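Conceptually, `package` writes the content returned by `get_package_content` into a zip archive at the given destination. A minimal sketch with only the standard library (the file names and bytes here are hypothetical, not the library's implementation):

```python
import io
import zipfile

# Hypothetical package content: file name -> bytes, mirroring the
# Dict[FileName, ...] shape returned by get_package_content().
content = {
    "rdf.yaml": b"type: model\nname: example\n",
    "weights.pt": b"\x00\x01\x02",
}

# Write the content into an in-memory zip, as package() does when
# given an IO[bytes] destination.
dest = io.BytesIO()
with zipfile.ZipFile(dest, mode="w") as zf:
    for name, data in content.items():
        zf.writestr(name, data)

# The archive round-trips: reading it back yields the same file names.
with zipfile.ZipFile(io.BytesIO(dest.getvalue())) as zf:
    assert sorted(zf.namelist()) == ["rdf.yaml", "weights.pt"]
```

Passing a `Path` or `str` instead of a bytes stream writes the archive to disk at that location.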
warn_about_tag_categories
pydantic-validator
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
DatasetId
¤
Bases: ResourceId
flowchart TD
bioimageio.spec.model.v0_5.DatasetId[DatasetId]
bioimageio.spec.generic.v0_3.ResourceId[ResourceId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec.generic.v0_3.ResourceId --> bioimageio.spec.model.v0_5.DatasetId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.generic.v0_3.ResourceId
click bioimageio.spec.model.v0_5.DatasetId href "" "bioimageio.spec.model.v0_5.DatasetId"
click bioimageio.spec.generic.v0_3.ResourceId href "" "bioimageio.spec.generic.v0_3.ResourceId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
NotEmpty[str],
RestrictCharacters(
string.ascii_lowercase + string.digits + "_-/."
),
annotated_types.Predicate(
lambda s: (
not (s.startswith("/") or s.endswith("/"))
)
),
]
]
the pydantic root model to validate the string
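The constraints encoded in `root_model` above can be paraphrased in plain Python: the string must be non-empty, may only contain lowercase letters, digits, and `_-/.`, and may not start or end with a slash. A sketch of that rule (not the actual validator; the example id is illustrative):

```python
import string

# Mirrors RestrictCharacters(string.ascii_lowercase + string.digits + "_-/.")
ALLOWED = set(string.ascii_lowercase + string.digits + "_-/.")

def looks_like_dataset_id(s: str) -> bool:
    """Sketch of the NotEmpty + RestrictCharacters + Predicate constraints."""
    return (
        len(s) > 0                   # NotEmpty[str]
        and set(s) <= ALLOWED        # restricted character set
        and not s.startswith("/")    # Predicate: no leading slash
        and not s.endswith("/")      # Predicate: no trailing slash
    )

assert looks_like_dataset_id("affable-shark")
assert not looks_like_dataset_id("/leading-slash")
assert not looks_like_dataset_id("Uppercase")
```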
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
Datetime
¤
Bases: RootModel[Annotated[datetime, BeforeValidator(_validate_datetime), PrettyPlainSerializer(_serialize_datetime_json, when_used='json-unless-none')]]
flowchart TD
bioimageio.spec.model.v0_5.Datetime[Datetime]
click bioimageio.spec.model.v0_5.Datetime href "" "bioimageio.spec.model.v0_5.Datetime"
Timestamp in ISO 8601 format, with a few restrictions (defined by the internal validator).
Methods:
| Name | Description |
|---|---|
now |
|
now
classmethod
¤
now()
Source code in src/bioimageio/spec/_internal/types.py
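`Datetime` wraps a standard `datetime` parsed from an ISO 8601 string. With the standard library alone, the accepted shape can be sketched as follows (the exact restrictions live in the internal `_validate_datetime` and are not reproduced here):

```python
from datetime import datetime, timezone

# ISO 8601 timestamps as used in RDF files parse with fromisoformat.
ts = datetime.fromisoformat("2019-12-11T12:22:32+00:00")
assert ts.year == 2019
assert ts.tzinfo is not None  # offset-aware timestamp

# Datetime.now() corresponds to taking the current time, e.g.:
now = datetime.now(timezone.utc)
assert now.tzinfo is timezone.utc
```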
DeprecatedLicenseId
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.DeprecatedLicenseId[DeprecatedLicenseId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.DeprecatedLicenseId
click bioimageio.spec.model.v0_5.DeprecatedLicenseId href "" "bioimageio.spec.model.v0_5.DeprecatedLicenseId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
¤
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
Doi
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.Doi[Doi]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Doi
click bioimageio.spec.model.v0_5.Doi href "" "bioimageio.spec.model.v0_5.Doi"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
A digital object identifier, see https://www.doi.org/
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[str, StringConstraints(pattern=DOI_REGEX)]
]
the pydantic root model to validate the string
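The value of `DOI_REGEX` is not shown here. A commonly used pattern for modern DOIs looks like the following — an assumption for illustration, not necessarily the exact regex used by the library:

```python
import re

# Assumed pattern: prefix "10." followed by a 4-9 digit registrant code,
# a slash, and a suffix. The library's actual DOI_REGEX may differ in detail.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+$")

assert DOI_PATTERN.match("10.1234/example.doi-2024")
# A Doi is the bare identifier, not a resolver URL:
assert not DOI_PATTERN.match("https://doi.org/10.1234/example.doi-2024")
```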
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
35 36 37 38 39 40 41 42 43 44 | |
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
19 20 21 22 23 | |
EnsureDtypeDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Cast the tensor data type to EnsureDtypeKwargs.dtype (if not matching).
This can for example be used to ensure the inner neural network model gets a different input tensor data type than the fully described bioimage.io model does.
Examples:
The described bioimage.io model (incl. preprocessing) accepts any float32-compatible tensor, normalizes it with percentiles and clipping and then casts it to uint8, which is what the neural network in this example expects.
- in YAML
  ```yaml
  inputs:
  - data:
      type: float32  # described bioimage.io model is compatible with any float32 input tensor
    preprocessing:
    - id: scale_range
      kwargs:
        axes: ['y', 'x']
        max_percentile: 99.8
        min_percentile: 5.0
    - id: clip
      kwargs:
        min: 0.0
        max: 1.0
    - id: ensure_dtype  # the neural network of the model requires uint8
      kwargs:
        dtype: uint8
  ```
- in Python:
  >>> preprocessing = [
  ...     ScaleRangeDescr(
  ...         kwargs=ScaleRangeKwargs(
  ...             axes=(AxisId('y'), AxisId('x')),
  ...             max_percentile=99.8,
  ...             min_percentile=5.0,
  ...         )
  ...     ),
  ...     ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),
  ...     EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype="uint8")),
  ... ]
Show JSON schema:
{
"$defs": {
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [EnsureDtypeDescr][]",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
}
Fields:
-
id(Literal['ensure_dtype']) -
kwargs(EnsureDtypeKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
EnsureDtypeKwargs
pydantic-model
¤
Bases: KwargsNode
key word arguments for EnsureDtypeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [EnsureDtypeDescr][]",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
}
Fields:
-
dtype(Literal['float32', 'float64', 'uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64', 'bool'])
dtype
pydantic-field
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
EnvironmentalImpact
pydantic-model
¤
Bases: Node
Environmental considerations for model training and deployment.
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
Show JSON schema:
{
"additionalProperties": true,
"description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"properties": {
"hardware_type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU/CPU specifications",
"title": "Hardware Type"
},
"hours_used": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total compute hours",
"title": "Hours Used"
},
"cloud_provider": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "If applicable",
"title": "Cloud Provider"
},
"compute_region": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Geographic location",
"title": "Compute Region"
},
"co2_emitted": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"title": "Co2 Emitted"
}
},
"title": "model.v0_5.EnvironmentalImpact",
"type": "object"
}
Fields:
-
hardware_type(Optional[str]) -
hours_used(Optional[float]) -
cloud_provider(Optional[str]) -
compute_region(Optional[str]) -
co2_emitted(Optional[float])
co2_emitted
pydantic-field
¤
co2_emitted: Optional[float] = None
kg CO2 equivalent
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
format_md
¤
format_md()
Filled Markdown template section following Hugging Face Model Card Template.
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Evaluation
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": true,
"properties": {
"model_id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model being evaluated.",
"title": "Model Id"
},
"dataset_id": {
"description": "Dataset used for evaluation.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"dataset_source": {
"description": "Source of the dataset.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
"dataset_role": {
"description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
"enum": [
"train",
"validation",
"test",
"independent",
"unknown"
],
"title": "Dataset Role",
"type": "string"
},
"sample_count": {
"description": "Number of evaluated samples.",
"title": "Sample Count",
"type": "integer"
},
"evaluation_factors": {
"description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Evaluation Factors",
"type": "array"
},
"evaluation_factors_long": {
"description": "Descriptions (long form) of each evaluation factor.",
"items": {
"type": "string"
},
"title": "Evaluation Factors Long",
"type": "array"
},
"metrics": {
"description": "(Abbreviations of) metrics used for evaluation.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Metrics",
"type": "array"
},
"metrics_long": {
"description": "Description of each metric used.",
"items": {
"type": "string"
},
"title": "Metrics Long",
"type": "array"
},
"results": {
"description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
"items": {
"items": {
"anyOf": [
{
"type": "string"
},
{
"type": "number"
},
{
"type": "integer"
}
]
},
"type": "array"
},
"title": "Results",
"type": "array"
},
"results_summary": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Interpretation of results for general audience.\n\nConsider:\n - Overall model performance\n - Comparison to existing methods\n - Limitations and areas for improvement",
"title": "Results Summary"
}
},
"required": [
"dataset_id",
"dataset_source",
"dataset_role",
"sample_count",
"evaluation_factors",
"evaluation_factors_long",
"metrics",
"metrics_long",
"results"
],
"title": "model.v0_5.Evaluation",
"type": "object"
}
Fields:
-
model_id(Optional[ModelId]) -
dataset_id(DatasetId) -
dataset_source(HttpUrl) -
dataset_role(Literal['train', 'validation', 'test', 'independent', 'unknown']) -
sample_count(int) -
evaluation_factors(List[Annotated[str, MaxLen(16)]]) -
evaluation_factors_long(List[str]) -
metrics(List[Annotated[str, MaxLen(16)]]) -
metrics_long(List[str]) -
results(List[List[Union[str, float, int]]]) -
results_summary(Optional[str])
Validators:
-
_validate_list_lengths
dataset_role
pydantic-field
¤
dataset_role: Literal[
"train", "validation", "test", "independent", "unknown"
]
Role of the dataset used for evaluation.
- `train`: dataset was (part of) the training data
- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning
- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data
- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data
- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.
evaluation_factors
pydantic-field
¤
evaluation_factors: List[Annotated[str, MaxLen(16)]]
(Abbreviations of) each evaluation factor.
Evaluation factors are criteria along which model performance is evaluated, e.g. different image conditions like 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'. An 'overall' factor may be included to summarize performance across all conditions.
evaluation_factors_long
pydantic-field
¤
evaluation_factors_long: List[str]
Descriptions (long form) of each evaluation factor.
metrics
pydantic-field
¤
metrics: List[Annotated[str, MaxLen(16)]]
(Abbreviations of) metrics used for evaluation.
results
pydantic-field
¤
results: List[List[Union[str, float, int]]]
Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).
results_summary
pydantic-field
¤
results_summary: Optional[str] = None
Interpretation of results for general audience.
Consider:
- Overall model performance
- Comparison to existing methods
- Limitations and areas for improvement
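`results` is effectively a matrix: one row per entry in `metrics`, one column per entry in `evaluation_factors`, and `_validate_list_lengths` checks that the shapes line up. A sketch of a mutually consistent set of fields (the metric names and numbers are hypothetical):

```python
# Hypothetical evaluation: two metrics across three evaluation factors.
metrics = ["dice", "iou"]
evaluation_factors = ["overall", "low SNR", "high density"]
results = [
    [0.91, 0.88, 0.84],  # dice per evaluation factor
    [0.85, 0.80, 0.76],  # iou per evaluation factor
]

# The length invariant (sketch of what _validate_list_lengths enforces):
assert len(results) == len(metrics)
assert all(len(row) == len(evaluation_factors) for row in results)
```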
format_md
¤
format_md()
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
FileDescr
pydantic-model
¤
Bases: Node
A file description
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
}
Fields:
-
source(FileSource) -
sha256(Optional[Sha256])
Validators:
-
_validate_sha256
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
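Validating the `sha256` field amounts to hashing the source file's bytes and comparing hex digests. With the standard library only (a sketch, not the library's implementation):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest in the form stored in FileDescr.sha256 (64 hex characters)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical file content:
digest = sha256_hex(b"example file content")
assert len(digest) == 64

# validate_sha256 would compare a (re)computed digest against the stored one;
# force_recompute=True corresponds to hashing again even if a value is cached.
assert digest == sha256_hex(b"example file content")
```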
FixedZeroMeanUnitVarianceAlongAxisKwargs
pydantic-model
¤
Bases: KwargsNode
key word arguments for FixedZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
}
Fields:
-
mean(NotEmpty[List[float]]) -
std(NotEmpty[List[Annotated[float, Ge(1e-06)]]]) -
axis(Annotated[NonBatchAxisId, Field(examples=['channel', 'index'])])
Validators:
-
_mean_and_std_match
axis
pydantic-field
¤
axis: Annotated[
NonBatchAxisId, Field(examples=["channel", "index"])
]
The axis of the mean/std values to normalize each entry along that dimension separately.
std
pydantic-field
¤
std: NotEmpty[List[Annotated[float, Ge(1e-06)]]]
The standard deviation value(s) to normalize with.
Size must match mean values.
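The along-axis variant applies a separate mean/std pair per entry along the chosen axis, e.g. per channel. A plain-Python sketch of that arithmetic for a single pixel (the data values are hypothetical):

```python
# Per-channel means and stds, as in the kwargs above:
mean = [101.5, 102.5, 103.5]
std = [11.7, 12.7, 13.7]

# One pixel with three channel values (hypothetical data):
pixel = [113.2, 102.5, 76.1]
normalized = [(v - m) / s for v, m, s in zip(pixel, mean, std)]

assert len(mean) == len(std)      # the constraint _mean_and_std_match enforces
assert abs(normalized[1]) < 1e-9  # a value equal to its channel mean maps to 0
```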
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
|
Union[Any, Mapping[str, Any]]
|
The object to validate. |
required |
|
Optional[bool]
|
Whether to raise an exception on invalid fields. |
None
|
|
Optional[bool]
|
Whether to extract data from object attributes. |
None
|
|
Union[ValidationContext, Mapping[str, Any], None]
|
Additional context to pass to the validator. |
None
|
Raises:
| Type | Description |
|---|---|
ValidationError
|
If the object failed validation. |
Returns:
| Type | Description |
|---|---|
Self
|
The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
FixedZeroMeanUnitVarianceDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Subtract a given mean and divide by the standard deviation.
Normalize with fixed, precomputed values for
FixedZeroMeanUnitVarianceKwargs.mean and FixedZeroMeanUnitVarianceKwargs.std
Use FixedZeroMeanUnitVarianceAlongAxisKwargs for independent scaling along given
axes.
Examples:
-
scalar value for whole tensor
- in YAML
preprocessing: - id: fixed_zero_mean_unit_variance kwargs: mean: 103.5 std: 13.7 - in Python
preprocessing = [FixedZeroMeanUnitVarianceDescr( ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7) ... )]
- in YAML
-
independently along an axis
- in YAML
preprocessing: - id: fixed_zero_mean_unit_variance kwargs: axis: channel mean: [101.5, 102.5, 103.5] std: [11.7, 12.7, 13.7] - in Python
preprocessing = [FixedZeroMeanUnitVarianceDescr( ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs( ... axis=AxisId("channel"), ... mean=[101.5, 102.5, 103.5], ... std=[11.7, 12.7, 13.7], ... ) ... )]
- in YAML
Show JSON schema:
{
"$defs": {
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
}
Fields:
- id (Literal['fixed_zero_mean_unit_variance'])
- kwargs (Union[FixedZeroMeanUnitVarianceKwargs, FixedZeroMeanUnitVarianceAlongAxisKwargs])
id
pydantic-field
¤
id: Literal["fixed_zero_mean_unit_variance"] = (
"fixed_zero_mean_unit_variance"
)
implemented_id
class-attribute
¤
implemented_id: Literal["fixed_zero_mean_unit_variance"] = (
"fixed_zero_mean_unit_variance"
)
kwargs
pydantic-field
¤
kwargs: Union[
FixedZeroMeanUnitVarianceKwargs,
FixedZeroMeanUnitVarianceAlongAxisKwargs,
]
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
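Conceptually, model_validate accepts either an existing instance or a mapping of field values and returns a validated instance, raising on failure. A minimal stdlib sketch of that contract (a dataclass stand-in of our own, not the actual pydantic implementation; the 1e-06 lower bound on std mirrors the schema above):

```python
from collections.abc import Mapping
from dataclasses import dataclass
from typing import Any

@dataclass
class FixedZeroMeanUnitVarianceKwargsSketch:
    """Toy stand-in illustrating model_validate's contract (hypothetical class)."""
    mean: float
    std: float

    @classmethod
    def model_validate(cls, obj: Any) -> "FixedZeroMeanUnitVarianceKwargsSketch":
        if isinstance(obj, cls):
            # already-validated instances pass through
            inst = obj
        elif isinstance(obj, Mapping):
            # mappings are parsed field by field
            inst = cls(mean=float(obj["mean"]), std=float(obj["std"]))
        else:
            raise TypeError(f"cannot validate {type(obj).__name__}")
        if inst.std < 1e-06:
            # schema constraint: std >= 1e-06
            raise ValueError("std must be >= 1e-06")
        return inst

kwargs = FixedZeroMeanUnitVarianceKwargsSketch.model_validate({"mean": 103.5, "std": 13.7})
```

The real method additionally threads through strict/context/by_alias options to pydantic's validator, but the accept-mapping-or-instance, return-Self-or-raise shape is the same.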
FixedZeroMeanUnitVarianceKwargs
pydantic-model
¤
Bases: KwargsNode
Keyword arguments for FixedZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
}
Fields:
std
pydantic-field
¤
std: Annotated[float, Ge(1e-06)]
The standard deviation value to normalize with.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
HttpUrl
¤
Bases: RootHttpUrl
```mermaid
flowchart TD
  bioimageio.spec.model.v0_5.HttpUrl[HttpUrl]
  bioimageio.spec._internal.root_url.RootHttpUrl[RootHttpUrl]
  bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
  bioimageio.spec._internal.root_url.RootHttpUrl --> bioimageio.spec.model.v0_5.HttpUrl
  bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.root_url.RootHttpUrl
  click bioimageio.spec.model.v0_5.HttpUrl href "" "bioimageio.spec.model.v0_5.HttpUrl"
  click bioimageio.spec._internal.root_url.RootHttpUrl href "" "bioimageio.spec._internal.root_url.RootHttpUrl"
  click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
```
A URL with the HTTP or HTTPS scheme.
Methods:

| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| absolute | analog to absolute method of pathlib |
| exists | True if URL is available |

Attributes:

| Name | Type | Description |
|---|---|---|
| host | Optional[str] | |
| parent | RootHttpUrl | |
| parents | Iterable[RootHttpUrl] | iterate over all URL parents (max 100) |
| path | Optional[str] | |
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
| scheme | str | |
| suffix | str | |
root_model
class-attribute
¤
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
absolute
¤
absolute()
Analog to pathlib's absolute method.
Source code in src/bioimageio/spec/_internal/root_url.py
exists
¤
exists()
True if URL is available
Source code in src/bioimageio/spec/_internal/url.py
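The parent/parents attributes treat the URL's path like a file path. A stdlib sketch of that computation (url_parent and url_parents are our own helpers, not the bioimageio.spec implementation):

```python
from urllib.parse import urlsplit, urlunsplit

def url_parent(url: str) -> str:
    """Drop the last path segment, analogous to pathlib.Path.parent."""
    parts = urlsplit(url)
    parent_path = parts.path.rsplit("/", 1)[0] or "/"
    return urlunsplit((parts.scheme, parts.netloc, parent_path, "", ""))

def url_parents(url: str, max_depth: int = 100):
    """Iterate over all URL parents, mirroring the documented 100-parent cap."""
    for _ in range(max_depth):
        parent = url_parent(url)
        if parent == url:  # reached the root: "https://host/"
            return
        yield parent
        url = parent

parent = url_parent("https://example.com/models/some-model/rdf.yaml")
```

exists() is different in kind: it requires a network round-trip (e.g. an HTTP HEAD/GET) to check availability, so it is omitted from this offline sketch.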
Identifier
¤
Bases: ValidatedString
```mermaid
flowchart TD
  bioimageio.spec.model.v0_5.Identifier[Identifier]
  bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
  bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Identifier
  click bioimageio.spec.model.v0_5.Identifier href "" "bioimageio.spec.model.v0_5.Identifier"
  click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
```
Methods:

| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

Attributes:

| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
¤
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
IndexAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
}
},
"required": [
"type"
],
"title": "model.v0_5.IndexAxisBase",
"type": "object"
}
Fields:
- description (Annotated[str, MaxLen(128)])
- type (Literal['index'])
- id (NonBatchAxisId)
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
IndexInputAxis
pydantic-model
¤
Bases: IndexAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
}
Fields:
- size (Annotated[Union[Annotated[int, Gt(0)], ParameterizedSize, SizeReference], Field(examples=[10, ParameterizedSize(min=32, step=16).model_dump(mode='json'), SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (NonBatchAxisId)
- description (Annotated[str, MaxLen(128)])
- type (Literal['index'])
- concatenable (bool)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Annotated[
Union[
Annotated[int, Gt(0)],
ParameterizedSize,
SizeReference,
],
Field(
examples=[
10,
ParameterizedSize(min=32, step=16).model_dump(
mode="json"
),
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as
- fixed integer
- parameterized series of valid sizes (ParameterizedSize)
- reference to another axis with an optional offset (SizeReference)
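The ParameterizedSize variant admits any size = min + n*step. A small sketch of those semantics (the helper functions are our own, for illustration):

```python
# Sketch of ParameterizedSize semantics: size = min + n*step for n = 0, 1, 2, ...
# (illustrative helpers, not part of bioimageio.spec).
def parameterized_size(min_size: int, step: int, n: int) -> int:
    """Return the n-th valid size on the min + n*step grid."""
    assert min_size > 0 and step > 0 and n >= 0
    return min_size + n * step

def is_valid_parameterized_size(size: int, min_size: int, step: int) -> bool:
    """Check whether `size` lies on the min + n*step grid."""
    return size >= min_size and (size - min_size) % step == 0

# For min=32, step=16 (the schema's example values):
sizes = [parameterized_size(32, 16, n) for n in range(4)]
```

A greater n always yields a greater size, which is what lets consumers scale the axis generically, e.g. to fit available memory.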
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
IndexOutputAxis
pydantic-model
¤
Bases: IndexAxisBase
Show JSON schema:
{
"$defs": {
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
}
Fields:
- id (NonBatchAxisId)
- description (Annotated[str, MaxLen(128)])
- type (Literal['index'])
- size (Annotated[Union[Annotated[int, Gt(0)], SizeReference, DataDependentSize], Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Annotated[
Union[
Annotated[int, Gt(0)],
SizeReference,
DataDependentSize,
],
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as
- fixed integer
- reference to another axis with an optional offset (SizeReference)
- data dependent size using DataDependentSize (size is only known after model inference)
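The SizeReference formula from the schema above, axis.size = reference.size * reference.scale / axis.scale + offset with fractions rounded down, can be written out directly. This sketch (our own helpers; we read note 3 as rounding the division down before adding the offset) reproduces the w/h example from the SizeReference docstring and adds a DataDependentSize bounds check:

```python
import math
from typing import Optional

def size_from_reference(ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0) -> int:
    """axis.size = reference.size * reference.scale / axis.scale + offset, rounded down."""
    return math.floor(ref_size * ref_scale / axis_scale) + offset

def within_data_dependent_size(size: int, min_size: int = 1, max_size: Optional[int] = None) -> bool:
    """DataDependentSize: the actual size is only known after inference,
    but must satisfy min <= size (and size <= max, if max is given)."""
    return size >= min_size and (max_size is None or size <= max_size)

# Docstring example: w has size 100 at scale 2; h has scale 4 and offset -1.
h_size = size_from_reference(100, ref_scale=2, axis_scale=4, offset=-1)
```

Because the axis and its reference must share a unit, the scale ratio converts between their sampling grids before the offset is applied in the referencing axis's own pixels.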
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
InputTensorDescr
pydantic-model
¤
Bases: TensorDescrBase[InputAxis]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ClipDescr][]",
"properties": {
"min": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
"title": "Min"
},
"min_percentile": {
"anyOf": [
{
"exclusiveMaximum": 100,
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
"title": "Min Percentile"
},
"max": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
"title": "Max"
},
"max_percentile": {
"anyOf": [
{
"exclusiveMinimum": 1,
"maximum": 100,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
"title": "Max Percentile"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
}
},
"title": "model.v0_5.ClipKwargs",
"type": "object"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [EnsureDtypeDescr][]",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`.\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize parameters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize parameter n = 0,1,2,... results in a greater **size**.\n This allows adjusting the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] interval.\nFor other percentiles the normalized values will partially be outside the [0, 1]\ninterval. Use `ScaleRangeDescr` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for [SoftmaxDescr][]",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by standard deviation.\n\nExamples:\n Subtract tensor mean and divide by standard deviation\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "input",
"description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has to be an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model.\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"optional": {
"default": false,
"description": "indicates that this tensor may be `None`",
"title": "Optional",
"type": "boolean"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, one is added\n to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n 'ensure_dtype' step is added to ensure preprocessing steps do not unintentionally\n change the data type.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.InputTensorDescr",
"type": "object"
}
Fields:
- description (Annotated[str, MaxLen(128)])
- axes (NotEmpty[Sequence[IO_AxisT]])
- test_tensor (FAIR[Optional[FileDescr_]])
- sample_tensor (FAIR[Optional[FileDescr_]])
- data (Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])
- id (TensorId)
- optional (bool)
- preprocessing (List[PreprocessingDescr])
Validators:
- _validate_axes → axes
- _validate_sample_tensor
- _check_data_type_across_channels → data
- _check_data_matches_channelaxis
- _validate_preprocessing_kwargs
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
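The lookup behind this property can be sketched in plain Python. `resolve_dtype` below is an illustrative helper, not part of bioimageio.spec; it mirrors the described behavior for a single data description and for per-channel descriptions, whose `type` must match (as the `_check_data_type_across_channels` validator enforces):

```python
from typing import Dict, List, Union

def resolve_dtype(data: Union[Dict[str, str], List[Dict[str, str]]]) -> str:
    """Return data["type"] for a single data description, or the shared
    "type" of per-channel descriptions ("float32" is the schema default)."""
    if isinstance(data, dict):
        return data.get("type", "float32")
    types = {d.get("type", "float32") for d in data}
    if len(types) != 1:
        raise ValueError(f"data type must match across channels, got {types}")
    return types.pop()

print(resolve_dtype({"type": "uint8"}))                       # uint8
print(resolve_dtype([{"type": "uint8"}, {"type": "uint8"}]))  # uint8
```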
id
pydantic-field
¤
id: TensorId
Input tensor id. No duplicates are allowed across all inputs and outputs.
preprocessing
pydantic-field
¤
preprocessing: List[PreprocessingDescr]
Description of how this input should be preprocessed.
notes:
- If preprocessing does not start with an 'ensure_dtype' entry, one is added to ensure an input tensor's data type matches the input tensor's data description.
- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an 'ensure_dtype' step is added to ensure preprocessing steps do not unintentionally change the data type.
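The padding behavior described in these notes can be sketched with plain dicts standing in for the preprocessing descriptions; `pad_preprocessing` is an illustrative helper under these assumptions, not the library's actual implementation:

```python
from typing import Dict, List

def pad_preprocessing(steps: List[Dict], dtype: str) -> List[Dict]:
    """Add 'ensure_dtype' steps as the notes describe: one at the start if
    missing, and one at the end unless the chain already ends with
    'ensure_dtype' or 'binarize'."""
    steps = list(steps)
    ensure = {"id": "ensure_dtype", "kwargs": {"dtype": dtype}}
    if not steps or steps[0]["id"] != "ensure_dtype":
        steps.insert(0, dict(ensure))
    if steps[-1]["id"] not in ("ensure_dtype", "binarize"):
        steps.append(dict(ensure))
    return steps

chain = [{"id": "scale_range", "kwargs": {"min_percentile": 5.0, "max_percentile": 99.8}}]
print([s["id"] for s in pad_preprocessing(chain, "float32")])
# ['ensure_dtype', 'scale_range', 'ensure_dtype']
```

A chain that already ends in `binarize` only gains the leading `ensure_dtype` step.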
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
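A conforming test tensor can be written with `numpy.save`, which produces the required numpy.lib file format; the shape and file name below are illustrative:

```python
import os
import tempfile

import numpy as np

# a (batch, channel, y, x) test tensor; shape chosen for illustration
tensor = np.random.rand(1, 3, 64, 64).astype("float32")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_input.npy")  # extension must be '.npy'
    np.save(path, tensor)                     # writes numpy.lib file format
    loaded = np.load(path)

assert loaded.dtype == np.float32
assert loaded.shape == (1, 3, 64, 64)
```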
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
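Assuming the method pairs the tensor's axis ids with the array's dimensions, its effect can be approximated as follows; `axis_sizes` is a simplified stand-in for the actual method, not its real implementation:

```python
from typing import Dict, Sequence

import numpy as np

def axis_sizes(axis_ids: Sequence[str], array: np.ndarray) -> Dict[str, int]:
    """Map each axis id to the corresponding array dimension.
    Simplified: assumes len(axis_ids) == array.ndim."""
    if len(axis_ids) != array.ndim:
        raise ValueError("number of axes must match array dimensionality")
    return dict(zip(axis_ids, array.shape))

print(axis_sizes(["batch", "channel", "y", "x"], np.zeros((1, 3, 64, 64))))
# {'batch': 1, 'channel': 3, 'y': 64, 'x': 64}
```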
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| `strict` | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| `from_attributes` | Optional[bool] | Whether to extract data from object attributes. | None |
| `context` | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
IntervalOrRatioDataDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
}
Fields:
- type (Annotated[IntervalOrRatioDType, Field(examples=['float32', 'float64', 'uint8', 'uint16'])])
- range (Tuple[Optional[float], Optional[float]])
- unit (Union[Literal['arbitrary unit'], SiUnit])
- scale (float)
- offset (Optional[float])

Validators:

- _replace_inf
range
pydantic-field
¤
range: Tuple[Optional[float], Optional[float]] = (
None,
None,
)
Tuple (minimum, maximum) specifying the allowed range of the data in this tensor.
None corresponds to min/max of what can be expressed by type.
type
pydantic-field
¤
type: Annotated[
IntervalOrRatioDType,
Field(
examples=["float32", "float64", "uint8", "uint16"]
),
] = "float32"
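A `range` of `(None, None)` (the default) means the limits come from **type**. The resolution can be sketched with a small, hypothetical helper (`resolve_range` and `DTYPE_BOUNDS` are illustrative, not part of bioimageio.spec, and only a few of the listed dtypes are covered here):

```python
from typing import Optional, Tuple

# Illustrative bounds for a few of the allowed dtypes.
DTYPE_BOUNDS = {
    "uint8": (0, 255),
    "int16": (-32768, 32767),
    "float32": (float("-inf"), float("inf")),
}

def resolve_range(
    dtype: str, range_: Tuple[Optional[float], Optional[float]]
) -> Tuple[float, float]:
    """Replace `None` entries with the min/max expressible by `dtype`."""
    lo, hi = DTYPE_BOUNDS[dtype]
    minimum = lo if range_[0] is None else range_[0]
    maximum = hi if range_[1] is None else range_[1]
    return (minimum, maximum)
```

With this sketch, `resolve_range("uint8", (None, None))` yields `(0, 255)`, matching the documented semantics of the default.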
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
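The validation flow of `model_validate` can be pictured with a toy stand-in (the names `toy_validate` and `ToyValidationError` are hypothetical and not part of bioimageio.spec): unknown keys are rejected (the schemas set `additionalProperties: false`), missing required keys raise, and defaults fill the rest.

```python
from typing import Any, Mapping

class ToyValidationError(ValueError):
    """Stand-in for pydantic's ValidationError."""

def toy_validate(obj: Mapping[str, Any], *, fields: dict, required: set) -> dict:
    """Mimic the shape of model_validate: reject unknown keys,
    check required keys, merge defaults, return validated data."""
    unknown = set(obj) - set(fields)
    if unknown:
        raise ToyValidationError(f"unknown fields: {sorted(unknown)}")
    missing = required - set(obj)
    if missing:
        raise ToyValidationError(f"missing required fields: {sorted(missing)}")
    return {**fields, **dict(obj)}

# Defaults mirroring IntervalOrRatioDataDescr's fields:
fields = {
    "type": "float32",
    "range": (None, None),
    "unit": "arbitrary unit",
    "scale": 1.0,
    "offset": None,
}
validated = toy_validate({"type": "uint8"}, fields=fields, required=set())
```

Here `validated` carries the overridden `type` alongside the untouched defaults; passing an unknown key raises `ToyValidationError`, loosely as `model_validate` raises `ValidationError`.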
KerasHdf5WeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
}
Fields:
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
- tensorflow_version (Version)

Validators:

- _validate_sha256
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that have trained this model resulting in the original weights file.
(If this is the initial weights entry, i.e. it does not have a parent)
Or the person(s) who have converted the weights to this weights format.
(If this is a child weight, i.e. it has a parent field)
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript, the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model) need to have this field.
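The constraint that exactly one weights entry (the trained one) has no `parent` can be checked with a small helper (hypothetical, for illustration only):

```python
from typing import Mapping, Sequence

def has_single_root(entries: Sequence[Mapping[str, str]]) -> bool:
    """True iff exactly one weights entry lacks a `parent` field."""
    roots = [e for e in entries if e.get("parent") is None]
    return len(roots) == 1

weights_entries = [
    {"format": "pytorch_state_dict"},  # trained weights: no parent
    {"format": "torchscript", "parent": "pytorch_state_dict"},  # converted
]
```

In this sketch `has_single_root(weights_entries)` is true; a list where every entry lacks `parent` (or none does) would violate the rule.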
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Source of the weights file.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
TensorFlow version used to create these weights.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
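`validate_sha256` compares the recorded hash against the actual file contents. The core of such a check can be sketched with the stdlib (the helper name `file_sha256` is illustrative; the real implementation lives in `_internal/io.py`):

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    weights_path = Path(d) / "weights.h5"  # hypothetical weights file
    weights_path.write_bytes(b"hello")
    digest = file_sha256(weights_path)
```

A mismatch between `digest` and the recorded `sha256` field would indicate a corrupted or substituted source file.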
LicenseId
¤
Bases: ValidatedString
Inheritance: ValidatedString → LicenseId
Methods:
| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
Attributes:
| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
¤
the pydantic root model to validate the string
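The ValidatedString pattern, where the string is checked on construction via `root_model`, can be approximated with a plain str subclass (names and the tiny license sample are hypothetical; the real LicenseId validates against the full SPDX license list):

```python
class ToyLicenseId(str):
    """str subclass rejecting unknown ids on construction,
    loosely mirroring the ValidatedString pattern."""

    _KNOWN = {"MIT", "Apache-2.0", "BSD-3-Clause", "CC-BY-4.0"}  # tiny sample

    def __new__(cls, value: object):
        text = str(value)
        if text not in cls._KNOWN:
            raise ValueError(f"not a known license id: {text!r}")
        return super().__new__(cls, text)

mit = ToyLicenseId("MIT")
```

Because the result is still a `str`, it can be used anywhere a plain string is expected while guaranteeing validity by construction.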
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
LinkedDataset
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io dataset.
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
}
},
"required": [
"id"
],
"title": "dataset.v0_3.LinkedDataset",
"type": "object"
}
Fields:

- version (Optional[Version])
- id (DatasetId)

Validators:

- _remove_version_number
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
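Per the schema above, only id is required (a non-empty string) and version is optional. The check can be sketched with a minimal helper (illustrative only, not the pydantic model):

```python
from typing import Any, Mapping

def check_linked_dataset(obj: Mapping[str, Any]) -> bool:
    """Mirror the JSON schema: required non-empty `id`,
    optional `version` as str/int/float or absent."""
    dataset_id = obj.get("id")
    if not isinstance(dataset_id, str) or len(dataset_id) < 1:
        return False
    version = obj.get("version")
    return version is None or isinstance(version, (str, int, float))
```

So `{"id": "some-dataset"}` passes, while an empty or missing `id` fails regardless of `version`.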
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
LinkedModel
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io model.
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io model.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid model `id` from the bioimage.io collection.",
"minLength": 1,
"title": "ModelId",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.LinkedModel",
"type": "object"
}
Fields:

- version (Optional[Version])
- id (ModelId)

Validators:

- _remove_version_number
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
LinkedResource
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io resource
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io resource",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid resource `id` from the official bioimage.io collection.",
"minLength": 1,
"title": "ResourceId",
"type": "string"
}
},
"required": [
"id"
],
"title": "generic.v0_3.LinkedResource",
"type": "object"
}
Fields:
- version (Optional[Version])
- id (ResourceId)

Validators:

- _remove_version_number
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Maintainer
pydantic-model
¤
Bases: _Maintainer_v0_2
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
}
Fields:
- affiliation (Optional[str])
- email (Optional[EmailStr])
- orcid (Annotated[Optional[OrcidId], Field(examples=['0000-0001-2345-6789'])])
- name (Optional[Annotated[str, Predicate(_has_no_slash)]])
- github_user (str)
Validators:
orcid
pydantic-field
¤
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_github_user
pydantic-validator
¤
validate_github_user(value: str)
Source code in src/bioimageio/spec/generic/v0_3.py
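From the schema and Fields list above, github_user is the only required Maintainer field, and name carries Predicate(_has_no_slash). Those two constraints can be sketched as follows (the helper name is hypothetical; the exact behavior of validate_github_user is defined in generic/v0_3.py):

```python
from typing import Optional

def check_maintainer(github_user: str, name: Optional[str] = None) -> bool:
    """github_user must be a non-empty string;
    a given name must not contain a slash."""
    if not github_user:
        return False
    if name is not None and "/" in name:
        return False
    return True
```

For example, a maintainer with `github_user="jane-doe"` and `name="Jane Doe"` passes, while a name such as `"Jane/Doe"` is rejected by the no-slash predicate.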
ModelDescr
pydantic-model
¤
Bases: GenericModelDescrBase
Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. These fields are typically stored in a YAML file which we call a model resource description file (model RDF).
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"AttachmentsDescr": {
"additionalProperties": true,
"properties": {
"files": {
"description": "File attachments",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Files",
"type": "array"
}
},
"title": "generic.v0_2.AttachmentsDescr",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BiasRisksLimitations": {
"additionalProperties": true,
"description": "Known biases, risks, technical limitations, and recommendations for model use.",
"properties": {
"known_biases": {
"default": "In general bioimage models may suffer from biases caused by:\n\n- Imaging protocol dependencies\n- Use of a specific cell type\n- Species-specific training data limitations\n\n",
"description": "Biases in training data or model behavior.",
"title": "Known Biases",
"type": "string"
},
"risks": {
"default": "Common risks in bioimage analysis include:\n\n- Erroneously assuming generalization to unseen experimental conditions\n- Trusting (overconfident) model outputs without validation\n- Misinterpretation of results\n\n",
"description": "Potential risks in the context of bioimage analysis.",
"title": "Risks",
"type": "string"
},
"limitations": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Technical limitations and failure modes.",
"title": "Limitations"
},
"recommendations": {
"default": "Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.",
"description": "Mitigation strategies regarding `known_biases`, `risks`, and `limitations`, as well as applicable best practices.\n\nConsider:\n- How to use a validation dataset?\n- How to manually validate?\n- Feasibility of domain adaptation for different experimental setups?",
"title": "Recommendations",
"type": "string"
}
},
"title": "model.v0_5.BiasRisksLimitations",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ClipDescr][]",
"properties": {
"min": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
"title": "Min"
},
"min_percentile": {
"anyOf": [
{
"exclusiveMaximum": 100,
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
"title": "Min Percentile"
},
"max": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
"title": "Max"
},
"max_percentile": {
"anyOf": [
{
"exclusiveMinimum": 1,
"maximum": 100,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
"title": "Max Percentile"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
}
},
"title": "model.v0_5.ClipKwargs",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"Datetime": {
"description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).",
"format": "date-time",
"title": "Datetime",
"type": "string"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [EnsureDtypeDescr][]",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"EnvironmentalImpact": {
"additionalProperties": true,
"description": "Environmental considerations for model training and deployment.\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"properties": {
"hardware_type": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU/CPU specifications",
"title": "Hardware Type"
},
"hours_used": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total compute hours",
"title": "Hours Used"
},
"cloud_provider": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "If applicable",
"title": "Cloud Provider"
},
"compute_region": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Geographic location",
"title": "Compute Region"
},
"co2_emitted": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "kg CO2 equivalent\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).",
"title": "Co2 Emitted"
}
},
"title": "model.v0_5.EnvironmentalImpact",
"type": "object"
},
"Evaluation": {
"additionalProperties": true,
"properties": {
"model_id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model being evaluated.",
"title": "Model Id"
},
"dataset_id": {
"description": "Dataset used for evaluation.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"dataset_source": {
"description": "Source of the dataset.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
"dataset_role": {
"description": "Role of the dataset used for evaluation.\n\n- `train`: dataset was (part of) the training data\n- `validation`: dataset was (part of) the validation data used during training, e.g. used for model selection or hyperparameter tuning\n- `test`: dataset was (part of) the designated test data; not used during training or validation, but acquired from the same source/distribution as training data\n- `independent`: dataset is entirely independent test data; not used during training or validation, and acquired from a different source/distribution than training data\n- `unknown`: role of the dataset is unknown; choose this if you are not certain if (a subset) of the data was seen by the model during training.",
"enum": [
"train",
"validation",
"test",
"independent",
"unknown"
],
"title": "Dataset Role",
"type": "string"
},
"sample_count": {
"description": "Number of evaluated samples.",
"title": "Sample Count",
"type": "integer"
},
"evaluation_factors": {
"description": "(Abbreviations of) each evaluation factor.\n\nEvaluation factors are criteria along which model performance is evaluated, e.g. different image conditions\nlike 'low SNR', 'high cell density', or different biological conditions like 'cell type A', 'cell type B'.\nAn 'overall' factor may be included to summarize performance across all conditions.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Evaluation Factors",
"type": "array"
},
"evaluation_factors_long": {
"description": "Descriptions (long form) of each evaluation factor.",
"items": {
"type": "string"
},
"title": "Evaluation Factors Long",
"type": "array"
},
"metrics": {
"description": "(Abbreviations of) metrics used for evaluation.",
"items": {
"maxLength": 16,
"type": "string"
},
"title": "Metrics",
"type": "array"
},
"metrics_long": {
"description": "Description of each metric used.",
"items": {
"type": "string"
},
"title": "Metrics Long",
"type": "array"
},
"results": {
"description": "Results for each metric (rows; outer list) and each evaluation factor (columns; inner list).",
"items": {
"items": {
"anyOf": [
{
"type": "string"
},
{
"type": "number"
},
{
"type": "integer"
}
]
},
"type": "array"
},
"title": "Results",
"type": "array"
},
"results_summary": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Interpretation of results for general audience.\n\nConsider:\n - Overall model performance\n - Comparison to existing methods\n - Limitations and areas for improvement",
"title": "Results Summary"
}
},
"required": [
"dataset_id",
"dataset_source",
"dataset_role",
"sample_count",
"evaluation_factors",
"evaluation_factors_long",
"metrics",
"metrics_long",
"results"
],
"title": "model.v0_5.Evaluation",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"InputTensorDescr": {
"additionalProperties": false,
"properties": {
"id": {
"default": "input",
"description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"optional": {
"default": false,
"description": "indicates that this tensor may be `None`",
"title": "Optional",
"type": "boolean"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, it is added\n to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n 'ensure_dtype' step is added to ensure preprocessing steps are not unintentionally\n changing the data type.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.InputTensorDescr",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"KerasHdf5WeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
},
"LinkedDataset": {
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
}
},
"required": [
"id"
],
"title": "dataset.v0_3.LinkedDataset",
"type": "object"
},
"LinkedModel": {
"additionalProperties": false,
"description": "Reference to a bioimage.io model.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid model `id` from the bioimage.io collection.",
"minLength": 1,
"title": "ModelId",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.LinkedModel",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"OnnxWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
},
"OutputTensorDescr": {
"additionalProperties": false,
"properties": {
"id": {
"default": "output",
"description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n If not given this is added to cast to this tensor's `data.type`.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.OutputTensorDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"PytorchStateDictWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
},
"RunMode": {
"additionalProperties": false,
"properties": {
"name": {
"anyOf": [
{
"const": "deepimagej",
"type": "string"
},
{
"type": "string"
}
],
"description": "Run mode name",
"title": "Name"
},
"kwargs": {
"additionalProperties": true,
"description": "Run mode specific key word arguments",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"name"
],
"title": "model.v0_4.RunMode",
"type": "object"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleMeanVarianceKwargs][]",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for [SoftmaxDescr][]",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TensorflowJsWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
},
"TensorflowSavedModelBundleWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
},
"TorchscriptWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
},
"TrainingDetails": {
"additionalProperties": true,
"properties": {
"training_preprocessing": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
"title": "Training Preprocessing"
},
"training_epochs": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Number of training epochs.",
"title": "Training Epochs"
},
"training_batch_size": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Batch size used in training.",
"title": "Training Batch Size"
},
"initial_learning_rate": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Initial learning rate used in training.",
"title": "Initial Learning Rate"
},
"learning_rate_schedule": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Learning rate schedule used in training.",
"title": "Learning Rate Schedule"
},
"loss_function": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Loss function used in training, e.g. nn.MSELoss.",
"title": "Loss Function"
},
"loss_function_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `loss_function`",
"title": "Loss Function Kwargs",
"type": "object"
},
"optimizer": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "optimizer, e.g. torch.optim.Adam",
"title": "Optimizer"
},
"optimizer_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `optimizer`",
"title": "Optimizer Kwargs",
"type": "object"
},
"regularization": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
"title": "Regularization"
},
"training_duration": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total training duration in hours.",
"title": "Training Duration"
}
},
"title": "model.v0_5.TrainingDetails",
"type": "object"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"WeightsDescr": {
"additionalProperties": false,
"properties": {
"keras_hdf5": {
"anyOf": [
{
"$ref": "#/$defs/KerasHdf5WeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"onnx": {
"anyOf": [
{
"$ref": "#/$defs/OnnxWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"pytorch_state_dict": {
"anyOf": [
{
"$ref": "#/$defs/PytorchStateDictWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_js": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowJsWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_saved_model_bundle": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"torchscript": {
"anyOf": [
{
"$ref": "#/$defs/TorchscriptWeightsDescr"
},
{
"type": "null"
}
],
"default": null
}
},
"title": "model.v0_5.WeightsDescr",
"type": "object"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
},
"bioimageio__spec__dataset__v0_2__DatasetDescr": {
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description",
"minLength": 1,
"title": "Name",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 1,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "file and other attachments"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"config": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n bioimageio: # here is the domain name\n my_custom_key: 3837283\n another_key:\n nested: value\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
"examples": [
{
"bioimageio": {
"another_key": {
"nested": "value"
},
"my_custom_key": 3837283
},
"imagej": {
"macro_dir": "path/to/macro/file"
}
}
],
"title": "Config",
"type": "object"
},
"download_url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to download the resource from (deprecated)",
"title": "Download Url"
},
"git_repo": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"rdf_source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
"title": "Rdf Source"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version)",
"title": "Version Number"
},
"format_version": {
"const": "0.2.4",
"description": "The format version of this resource specification\n(not the `version` of the resource description)\nWhen creating a new resource always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"documentation": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
],
"title": "Documentation"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"3D-Slicer-1.0",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMD-newlib",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"any-OSI",
"any-OSI-perl-modules",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"Artistic-dist",
"Aspell-RU",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Boehm-GC-without-fee",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-first-lines",
"BSD-2-Clause-Patent",
"BSD-2-Clause-pkgconf-disclaimer",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"Catharon",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC-PDM-1.0",
"CC-SA-1.0",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CryptoSwift",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"cve-tou",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"DocBook-DTD",
"DocBook-Schema",
"DocBook-Stylesheet",
"DocBook-XML",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRSD",
"FSFULLRWD",
"FSL-1.1-ALv2",
"FSL-1.1-MIT",
"FTL",
"Furuseth",
"fwlw",
"Game-Programming-Gems",
"GCR-docs",
"GD",
"generic-xts",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"Gutmann",
"HaskellReport",
"HDF5",
"hdparm",
"HIDAPI",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-acknowledgement",
"HPND-export-US-modify",
"HPND-export2-US",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Intel",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-merchantability-variant",
"HPND-MIT-disclaimer",
"HPND-Netrek",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-sell-variant-MIT-disclaimer-rev",
"HPND-UC",
"HPND-UC-export-US",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"InnoSetup",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"jove",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-1.6.35",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"man2html",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MIPS",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-Click",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Khronos-old",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCBI-PD",
"NCGL-UK-2.0",
"NCL",
"NCSA",
"NetCDF",
"Newsletr",
"NGPL",
"ngrep",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTIA-PD",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OAR",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"pkgconf",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PPL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"Ruby-pty",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"Sendmail-Open-Source-1.1",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMAIL-GPL",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"SOFA",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"SUL-1.0",
"Sun-PPP",
"Sun-PPP-2000",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"ThirdEye",
"threeparttable",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TrustedQSL",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"Ubuntu-font-1.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"Unlicense-libtelnet",
"Unlicense-libwhirlpool",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"wwl",
"X11",
"X11-distribute-modifications-variant",
"X11-swapped",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"xzoom",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Net-SNMP",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"description",
"format_version",
"type"
],
"title": "dataset 0.2.4",
"type": "object"
},
"bioimageio__spec__dataset__v0_3__DatasetDescr": {
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of this resource description and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"3D-Slicer-1.0",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMD-newlib",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"any-OSI",
"any-OSI-perl-modules",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"Artistic-dist",
"Aspell-RU",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Boehm-GC-without-fee",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-first-lines",
"BSD-2-Clause-Patent",
"BSD-2-Clause-pkgconf-disclaimer",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"Catharon",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC-PDM-1.0",
"CC-SA-1.0",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CryptoSwift",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"cve-tou",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"DocBook-DTD",
"DocBook-Schema",
"DocBook-Stylesheet",
"DocBook-XML",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRSD",
"FSFULLRWD",
"FSL-1.1-ALv2",
"FSL-1.1-MIT",
"FTL",
"Furuseth",
"fwlw",
"Game-Programming-Gems",
"GCR-docs",
"GD",
"generic-xts",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"Gutmann",
"HaskellReport",
"HDF5",
"hdparm",
"HIDAPI",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-acknowledgement",
"HPND-export-US-modify",
"HPND-export2-US",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Intel",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-merchantability-variant",
"HPND-MIT-disclaimer",
"HPND-Netrek",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-sell-variant-MIT-disclaimer-rev",
"HPND-UC",
"HPND-UC-export-US",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"InnoSetup",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"jove",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-1.6.35",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"man2html",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MIPS",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-Click",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Khronos-old",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCBI-PD",
"NCGL-UK-2.0",
"NCL",
"NCSA",
"NetCDF",
"Newsletr",
"NGPL",
"ngrep",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTIA-PD",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OAR",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"pkgconf",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PPL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"Ruby-pty",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"Sendmail-Open-Source-1.1",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMAIL-GPL",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"SOFA",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"SUL-1.0",
"Sun-PPP",
"Sun-PPP-2000",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"ThirdEye",
"threeparttable",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TrustedQSL",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"Ubuntu-font-1.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"Unlicense-libtelnet",
"Unlicense-libwhirlpool",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"wwl",
"X11",
"X11-distribute-modifications-variant",
"X11-swapped",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"xzoom",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Net-SNMP",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.3.0",
"description": "The **format** version of this resource specification",
"title": "Format Version",
"type": "string"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"title": "Documentation"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"config": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Config",
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n giraffe_neckometer: # here is the domain name\n length: 3837283\n address:\n home: zoo\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"parent": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The description from which this one is derived",
"title": "Parent"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"format_version",
"type"
],
"title": "dataset 0.3.0",
"type": "object"
},
"bioimageio__spec__generic__v0_2__Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_2.Author",
"type": "object"
},
"bioimageio__spec__generic__v0_2__CiteEntry": {
"additionalProperties": false,
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details. (alternatively specify `url`)",
"title": "Doi"
},
"url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a `doi` instead)",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_2.CiteEntry",
"type": "object"
},
"bioimageio__spec__generic__v0_2__Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_2.Maintainer",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"bioimageio__spec__generic__v0_3__BioimageioConfig": {
"additionalProperties": true,
"description": "bioimage.io internal metadata.",
"properties": {},
"title": "generic.v0_3.BioimageioConfig",
"type": "object"
},
"bioimageio__spec__generic__v0_3__CiteEntry": {
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Config": {
"additionalProperties": true,
"description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
"properties": {
"bioimageio": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__BioimageioConfig"
}
},
"title": "generic.v0_3.Config",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
},
"bioimageio__spec__model__v0_5__BioimageioConfig": {
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
},
"funded_by": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Funding agency, grant number if applicable",
"title": "Funded By"
},
"architecture_type": {
"anyOf": [
{
"maxLength": 32,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Model architecture type, e.g., 3D U-Net, ResNet, transformer",
"title": "Architecture Type"
},
"architecture_description": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Text description of model architecture.",
"title": "Architecture Description"
},
"modality": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Input modality, e.g., fluorescence microscopy, electron microscopy",
"title": "Modality"
},
"target_structure": {
"description": "Biological structure(s) the model is designed to analyze, e.g., nuclei, mitochondria, cells",
"items": {
"type": "string"
},
"title": "Target Structure",
"type": "array"
},
"task": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Bioimage-specific task type, e.g., segmentation, classification, detection, denoising",
"title": "Task"
},
"new_version": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A new version of this model exists with a different model id.",
"title": "New Version"
},
"out_of_scope_use": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Describe how the model may be misused in bioimage analysis contexts and what users should **not** do with the model.",
"title": "Out Of Scope Use"
},
"bias_risks_limitations": {
"$ref": "#/$defs/BiasRisksLimitations",
"description": "Description of known bias, risks, and technical limitations for in-scope model use."
},
"model_parameter_count": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Total number of model parameters.",
"title": "Model Parameter Count"
},
"training": {
"$ref": "#/$defs/TrainingDetails",
"description": "Details on how the model was trained."
},
"inference_time": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Average inference time per image/tile. Specify hardware and image size. Multiple examples can be given.",
"title": "Inference Time"
},
"memory_requirements_inference": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for inference. Multiple examples with different image size can be given.",
"title": "Memory Requirements Inference"
},
"memory_requirements_training": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "GPU memory needed for training. Multiple examples with different image/batch sizes can be given.",
"title": "Memory Requirements Training"
},
"evaluations": {
"description": "Quantitative model evaluations.\n\nNote:\n At the moment we recommend to include only a single test dataset\n (with evaluation factors that may mark subsets of the dataset)\n to avoid confusion and make the presentation of results cleaner.",
"items": {
"$ref": "#/$defs/Evaluation"
},
"title": "Evaluations",
"type": "array"
},
"environmental_impact": {
"$ref": "#/$defs/EnvironmentalImpact",
"description": "Environmental considerations for model training and deployment"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
},
"bioimageio__spec__model__v0_5__Config": {
"additionalProperties": true,
"properties": {
"bioimageio": {
"$ref": "#/$defs/bioimageio__spec__model__v0_5__BioimageioConfig"
}
},
"title": "model.v0_5.Config",
"type": "object"
}
},
"additionalProperties": false,
"description": "Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.\nThese fields are typically stored in a YAML file which we call a model resource description file (model RDF).",
"properties": {
"name": {
"description": "A human-readable name of this model.\nIt should be no longer than 64 characters\nand may only contain letter, number, underscore, minus, parentheses and spaces.\nWe recommend to chose a name that refers to the model's task and image modality.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the model RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"3D-Slicer-1.0",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMD-newlib",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"any-OSI",
"any-OSI-perl-modules",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"Artistic-dist",
"Aspell-RU",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Boehm-GC-without-fee",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-first-lines",
"BSD-2-Clause-Patent",
"BSD-2-Clause-pkgconf-disclaimer",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"Catharon",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC-PDM-1.0",
"CC-SA-1.0",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CryptoSwift",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"cve-tou",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"DocBook-DTD",
"DocBook-Schema",
"DocBook-Stylesheet",
"DocBook-XML",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRSD",
"FSFULLRWD",
"FSL-1.1-ALv2",
"FSL-1.1-MIT",
"FTL",
"Furuseth",
"fwlw",
"Game-Programming-Gems",
"GCR-docs",
"GD",
"generic-xts",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"Gutmann",
"HaskellReport",
"HDF5",
"hdparm",
"HIDAPI",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-acknowledgement",
"HPND-export-US-modify",
"HPND-export2-US",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Intel",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-merchantability-variant",
"HPND-MIT-disclaimer",
"HPND-Netrek",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-sell-variant-MIT-disclaimer-rev",
"HPND-UC",
"HPND-UC-export-US",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"InnoSetup",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"jove",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-1.6.35",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"man2html",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MIPS",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-Click",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Khronos-old",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCBI-PD",
"NCGL-UK-2.0",
"NCL",
"NCSA",
"NetCDF",
"Newsletr",
"NGPL",
"ngrep",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTIA-PD",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OAR",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"pkgconf",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PPL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"Ruby-pty",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"Sendmail-Open-Source-1.1",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMAIL-GPL",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"SOFA",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"SUL-1.0",
"Sun-PPP",
"Sun-PPP-2000",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"ThirdEye",
"threeparttable",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TrustedQSL",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"Ubuntu-font-1.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"Unlicense-libtelnet",
"Unlicense-libwhirlpool",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"wwl",
"X11",
"X11-distribute-modifications-variant",
"X11-swapped",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"xzoom",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Net-SNMP",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom licenses beyond the SPDX license list; if you need one, please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them have to specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.5.7",
"description": "Version of the bioimage.io model description specification used.\nWhen creating a new model always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"type": {
"const": "model",
"description": "Specialized resource type 'model'",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.\nThe documentation should include a '#[#] Validation' (sub)section\nwith details on how to quantitatively validate the model on unseen data.",
"title": "Documentation"
},
"inputs": {
"description": "Describes the input tensors expected by this model.",
"items": {
"$ref": "#/$defs/InputTensorDescr"
},
"minItems": 1,
"title": "Inputs",
"type": "array"
},
"outputs": {
"description": "Describes the output tensors.",
"items": {
"$ref": "#/$defs/OutputTensorDescr"
},
"minItems": 1,
"title": "Outputs",
"type": "array"
},
"packaged_by": {
"description": "The persons that have packaged and uploaded this model.\nOnly required if those persons differ from the `authors`.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Packaged By",
"type": "array"
},
"parent": {
"anyOf": [
{
"$ref": "#/$defs/LinkedModel"
},
{
"type": "null"
}
],
"default": null,
"description": "The model from which this model is derived, e.g. by fine-tuning the weights."
},
"run_mode": {
"anyOf": [
{
"$ref": "#/$defs/RunMode"
},
{
"type": "null"
}
],
"default": null,
"description": "Custom run mode for this model: for more complex prediction procedures like test time\ndata augmentation that currently cannot be expressed in the specification.\nNo standard run modes are defined yet."
},
"timestamp": {
"$ref": "#/$defs/Datetime",
"description": "Timestamp in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).\n(In Python a datetime object is valid, too)."
},
"training_data": {
"anyOf": [
{
"$ref": "#/$defs/LinkedDataset"
},
{
"$ref": "#/$defs/bioimageio__spec__dataset__v0_3__DatasetDescr"
},
{
"$ref": "#/$defs/bioimageio__spec__dataset__v0_2__DatasetDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "The dataset used to train this model",
"title": "Training Data"
},
"weights": {
"$ref": "#/$defs/WeightsDescr",
"description": "The weights for this model.\nWeights can be given for different formats, but should otherwise be equivalent.\nThe available weight formats determine which consumers can use this model."
},
"config": {
"$ref": "#/$defs/bioimageio__spec__model__v0_5__Config"
}
},
"required": [
"name",
"format_version",
"type",
"inputs",
"outputs",
"weights"
],
"title": "model 0.5.7",
"type": "object"
}
Fields:
- _validation_summary(Optional[ValidationSummary])
- description(FAIR[Annotated[str, MaxLen(1024), warn(MaxLen(512), 'Description longer than 512 characters.')]])
- covers(List[FileSource_cover])
- id_emoji(Optional[Annotated[str, Len(min_length=1, max_length=2), Field(examples=['🦈', '🦥'])]])
- attachments(List[FileDescr_])
- cite(FAIR[List[CiteEntry]])
- license(FAIR[Annotated[Annotated[Union[LicenseId, DeprecatedLicenseId, None], Field(union_mode='left_to_right')], warn(Optional[LicenseId], '{value} is deprecated, see https://spdx.org/licenses/{value}.html'), Field(examples=['CC0-1.0', 'MIT', 'BSD-2-Clause'])]])
- git_repo(Annotated[Optional[HttpUrl], Field(examples=['https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad'])])
- icon(Union[Annotated[str, Len(min_length=1, max_length=2)], FileSource_, None])
- links(Annotated[List[str], Field(examples=[('ilastik/ilastik', 'deepimagej/deepimagej', 'zero/notebook_u-net_3d_zerocostdl4mic')])])
- uploader(Optional[Uploader])
- maintainers(List[Maintainer])
- tags(FAIR[Annotated[List[str], Field(examples=[('unet2d', 'pytorch', 'nucleus', 'segmentation', 'dsb2018')])]])
- version(Optional[Version])
- version_comment(Optional[Annotated[str, MaxLen(512)]])
- format_version(Literal['0.5.7'])
- type(Literal['model'])
- id(Optional[ModelId])
- authors(FAIR[List[Author]])
- documentation(FAIR[Optional[FileSource_documentation]])
- inputs(NotEmpty[Sequence[InputTensorDescr]])
- name(Annotated[str, RestrictCharacters(string.ascii_letters + string.digits + '_+- ()'), MinLen(5), MaxLen(128), warn(MaxLen(64), 'Name longer than 64 characters.', INFO)])
- outputs(NotEmpty[Sequence[OutputTensorDescr]])
- packaged_by(List[Author])
- parent(Optional[LinkedModel])
- run_mode(Annotated[Optional[RunMode], warn(None, "Run mode '{value}' has limited support across consumer softwares.")])
- timestamp(Datetime)
- training_data(Annotated[Union[None, LinkedDataset, DatasetDescr, DatasetDescr02], Field(union_mode='left_to_right')])
- weights(Annotated[WeightsDescr, WrapSerializer(package_weights)])
- config(Config)
Validators:
- _check_maintainers_exist
- warn_about_tag_categories→tags
- _remove_version_number
- _validate_documentation→documentation
- _validate_input_axes→inputs
- _validate_test_tensors
- _validate_tensor_references_in_proc_kwargs
- _validate_tensor_ids→outputs
- _validate_output_axes→outputs
- _validate_parent_is_not_self
- _add_default_cover
- _convert
authors
pydantic-field
¤
authors: FAIR[List[Author]]
The authors are the creators of the model RDF and the primary points of contact.
description
pydantic-field
¤
description: FAIR[
Annotated[
str,
MaxLen(1024),
warn(
MaxLen(512),
"Description longer than 512 characters.",
),
]
] = ""
A string containing a brief description.
documentation
pydantic-field
¤
documentation: FAIR[Optional[FileSource_documentation]] = (
None
)
URL or relative path to a markdown file with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
The documentation should include a '#[#] Validation' (sub)section
with details on how to quantitatively validate the model on unseen data.
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Annotated[
Optional[HttpUrl],
Field(
examples=[
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
]
),
] = None
A URL to the Git repository where the resource is being developed.
icon
pydantic-field
¤
icon: Union[
Annotated[str, Len(min_length=1, max_length=2)],
FileSource_,
None,
] = None
An icon for illustration, e.g. on bioimage.io
id
pydantic-field
¤
id: Optional[ModelId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
id_emoji
pydantic-field
¤
id_emoji: Optional[
Annotated[
str,
Len(min_length=1, max_length=2),
Field(examples=["🦈", "🦥"]),
]
] = None
UTF-8 emoji for display alongside the id.
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
inputs
pydantic-field
¤
inputs: NotEmpty[Sequence[InputTensorDescr]]
Describes the input tensors expected by this model.
license
pydantic-field
¤
license: FAIR[
Annotated[
Annotated[
Union[LicenseId, DeprecatedLicenseId, None],
Field(union_mode="left_to_right"),
],
warn(
Optional[LicenseId],
"{value} is deprecated, see https://spdx.org/licenses/{value}.html",
),
Field(examples=["CC0-1.0", "MIT", "BSD-2-Clause"]),
]
] = None
An SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need one, please open a GitHub issue to discuss your intentions with the community.
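As a rough illustration of the left_to_right union above (accept a current LicenseId first, fall back to a DeprecatedLicenseId with a warning, reject everything else), the logic can be sketched in plain Python. The ID sets here are tiny illustrative subsets of the full lists in the schema, and check_license is a hypothetical helper, not part of bioimageio.spec:

```python
import warnings

# Illustrative subsets of the SPDX and deprecated license ID lists above.
SPDX_IDS = {"MIT", "BSD-2-Clause", "CC0-1.0", "Zlib"}
DEPRECATED_IDS = {"GPL-3.0", "LGPL-2.1", "AGPL-3.0"}

def check_license(value: str) -> str:
    """Mimic the left_to_right union: LicenseId, then DeprecatedLicenseId."""
    if value in SPDX_IDS:
        return value
    if value in DEPRECATED_IDS:
        # Mirrors the spec's warn(...) message for deprecated IDs.
        warnings.warn(
            f"{value} is deprecated, see https://spdx.org/licenses/{value}.html"
        )
        return value
    raise ValueError(f"{value} is not a known SPDX license identifier")
```

Custom license strings fail outright, matching the spec's stance that only SPDX-listed identifiers are supported.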
links
pydantic-field
¤
links: Annotated[
List[str],
Field(
examples=[
(
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic",
)
]
),
]
IDs of other bioimage.io resources
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers, and at least some of them have to specify their github_user name
name
pydantic-field
¤
name: Annotated[
str,
RestrictCharacters(
string.ascii_letters + string.digits + "_+- ()"
),
MinLen(5),
MaxLen(128),
warn(
MaxLen(64), "Name longer than 64 characters.", INFO
),
]
A human-readable name of this model. It should be no longer than 64 characters and may only contain letters, numbers, underscores, minus signs, parentheses, and spaces. We recommend choosing a name that refers to the model's task and image modality.
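The name constraints in the annotation above (restricted character set, MinLen(5), MaxLen(128), with an INFO warning above 64 characters) can be sketched as a standalone check. validate_name is a hypothetical helper, not the library's validator:

```python
import string

# Same character set as RestrictCharacters in the annotation above.
ALLOWED = set(string.ascii_letters + string.digits + "_+- ()")

def validate_name(name: str) -> bool:
    """Raise on hard violations; return True iff the spec would emit
    the INFO warning for names longer than 64 characters."""
    if not (5 <= len(name) <= 128):
        raise ValueError("name must be 5 to 128 characters long")
    bad = set(name) - ALLOWED
    if bad:
        raise ValueError(f"name contains disallowed characters: {sorted(bad)}")
    return len(name) > 64  # True -> only a warning, not an error
```

Note that a too-long name within 128 characters is still valid; the spec merely warns about it.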
outputs
pydantic-field
¤
outputs: NotEmpty[Sequence[OutputTensorDescr]]
Describes the output tensors.
packaged_by
pydantic-field
¤
packaged_by: List[Author]
The persons that have packaged and uploaded this model.
Only required if those persons differ from the authors.
parent
pydantic-field
¤
parent: Optional[LinkedModel] = None
The model from which this model is derived, e.g. by fine-tuning the weights.
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
run_mode
pydantic-field
¤
run_mode: Annotated[
Optional[RunMode],
warn(
None,
"Run mode '{value}' has limited support across consumer softwares.",
),
] = None
Custom run mode for this model: for more complex prediction procedures like test time data augmentation that currently cannot be expressed in the specification. No standard run modes are defined yet.
tags
pydantic-field
¤
tags: FAIR[
Annotated[
List[str],
Field(
examples=[
(
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018",
)
]
),
]
]
Associated tags
training_data
pydantic-field
¤
training_data: Annotated[
Union[
None, LinkedDataset, DatasetDescr, DatasetDescr02
],
Field(union_mode="left_to_right"),
] = None
The dataset used to train this model
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
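SemVer 2.0 compliance can be approximated with a small regex. This is a simplified sketch of the core grammar (MAJOR.MINOR.PATCH plus optional pre-release/build metadata), not the spec's actual Version validator, and the official grammar is stricter about dot-separated identifiers:

```python
import re

# Simplified SemVer 2.0 pattern: no leading zeros in the numeric parts,
# optional "-prerelease" and "+build" suffixes.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-[0-9A-Za-z.-]+)?(?:\+[0-9A-Za-z.-]+)?$"
)

def is_semver(v: str) -> bool:
    return SEMVER.match(v) is not None
```

For example, the format_version "0.5.7" used by this description is itself SemVer-shaped.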
version_comment
pydantic-field
¤
version_comment: Optional[Annotated[str, MaxLen(512)]] = (
None
)
A comment on the version of the resource.
weights
pydantic-field
¤
weights: Annotated[
WeightsDescr, WrapSerializer(package_weights)
]
The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
convert_from_old_format_wo_validation
classmethod
¤
convert_from_old_format_wo_validation(
data: Dict[str, Any],
) -> None
Convert metadata following an older format version to this class's format without validating the result.
Source code in src/bioimageio/spec/model/v0_5.py
get_axis_sizes
¤
get_axis_sizes(
ns: Mapping[
Tuple[TensorId, AxisId], ParameterizedSize_N
],
batch_size: Optional[int] = None,
*,
max_input_shape: Optional[
Mapping[Tuple[TensorId, AxisId], int]
] = None,
) -> _AxisSizes
Determine input and output block shape for scale factors ns of parameterized input sizes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ns | Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N] | Scale factor | required |
| batch_size | Optional[int] | The desired size of the batch dimension. If given batch_size overwrites any batch size present in max_input_shape. Default 1. | None |
| max_input_shape | Optional[Mapping[Tuple[TensorId, AxisId], int]] | Limits the derived block shapes. Each axis for which the input size, parameterized by | None |
Returns:
| Type | Description |
|---|---|
| _AxisSizes | Resolved axis sizes for model inputs and outputs. |
Source code in src/bioimageio/spec/model/v0_5.py
get_batch_size
staticmethod
¤
Source code in src/bioimageio/spec/model/v0_5.py
get_input_test_arrays
¤
get_input_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
get_ns
¤
Get parameter n for each parameterized axis such that the valid input size is >= the given input size.
Source code in src/bioimageio/spec/model/v0_5.py
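For a single parameterized axis with valid sizes size_min + n*step, the n that get_ns looks for boils down to a ceiling division. get_n below is a hypothetical one-axis helper sketching that arithmetic, not the library method itself:

```python
import math

def get_n(target_size: int, size_min: int, step: int) -> int:
    """Smallest n >= 0 such that size_min + n*step >= target_size."""
    if step <= 0:
        raise ValueError("step must be positive")
    return max(0, math.ceil((target_size - size_min) / step))
```

E.g. with size_min=64 and step=32, a requested size of 257 needs n=7, yielding a valid input size of 64 + 7*32 = 288.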
get_output_tensor_sizes
¤
get_output_tensor_sizes(
input_sizes: Mapping[TensorId, Mapping[AxisId, int]],
) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]
Returns the tensor output sizes for the given input_sizes. The output size is exact only if input_sizes is a valid input shape; otherwise it may be larger than the actual (valid) output.
Source code in src/bioimageio/spec/model/v0_5.py
get_output_test_arrays
¤
get_output_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py
get_tensor_sizes
¤
get_tensor_sizes(
ns: Mapping[
Tuple[TensorId, AxisId], ParameterizedSize_N
],
batch_size: int,
) -> _TensorSizes
Source code in src/bioimageio/spec/model/v0_5.py
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dest | Optional[Union[ZipFile, IO[bytes], Path, str]] | (path/bytes stream of) destination zipfile | None |
Source code in src/bioimageio/spec/_internal/common_nodes.py
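The packaging step can be sketched with the standard zipfile module. package_to_zip is a hypothetical stand-in that writes a name-to-bytes mapping into an archive, loosely following the dest parameter above; the real package() additionally resolves the description's file sources:

```python
import io
import zipfile

def package_to_zip(content: dict, dest=None):
    """Write a mapping of file name -> bytes into a zip archive.
    `dest` may be a path or binary stream; None uses an in-memory buffer."""
    buf = dest if dest is not None else io.BytesIO()
    with zipfile.ZipFile(buf, mode="w") as zf:
        for name, data in content.items():
            zf.writestr(name, data)
    return buf

# Hypothetical minimal package content: the rdf.yaml plus one weights file.
archive = package_to_zip(
    {"rdf.yaml": b"format_version: 0.5.7\n", "weights.onnx": b"\x00"}
)
```

Passing an open binary stream for dest keeps the archive entirely in memory, which is handy for tests.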
warn_about_tag_categories
pydantic-validator
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
ModelId
¤
Bases: ResourceId
flowchart TD
bioimageio.spec.model.v0_5.ModelId[ModelId]
bioimageio.spec.generic.v0_3.ResourceId[ResourceId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec.generic.v0_3.ResourceId --> bioimageio.spec.model.v0_5.ModelId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.generic.v0_3.ResourceId
click bioimageio.spec.model.v0_5.ModelId href "" "bioimageio.spec.model.v0_5.ModelId"
click bioimageio.spec.generic.v0_3.ResourceId href "" "bioimageio.spec.generic.v0_3.ResourceId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
Methods:
| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
Attributes:
| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
NotEmpty[str],
RestrictCharacters(
string.ascii_lowercase + string.digits + "_-/."
),
annotated_types.Predicate(
lambda s: (
not (s.startswith("/") or s.endswith("/"))
)
),
]
]
the pydantic root model to validate the string
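The root_model constraints shown above (non-empty, lowercase letters, digits, the characters "_-/.", and no leading or trailing "/") can be mirrored in plain Python. is_valid_resource_id is a hypothetical check, and the IDs in the usage are only example-shaped strings:

```python
import string

# Same character set as RestrictCharacters in root_model above.
_ALLOWED = set(string.ascii_lowercase + string.digits + "_-/.")

def is_valid_resource_id(s: str) -> bool:
    """Mirror root_model: non-empty, restricted charset, no edge slashes."""
    return (
        len(s) > 0
        and set(s) <= _ALLOWED
        and not s.startswith("/")
        and not s.endswith("/")
    )
```

Uppercase letters are rejected because the charset is built from ascii_lowercase only.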
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
NominalOrOrdinalDataDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
}
Fields:
- values(TVs)
- type(Annotated[NominalOrOrdinalDType, Field(examples=['float32', 'uint8', 'uint16', 'int64', 'bool'])])
- unit(Optional[Union[Literal['arbitrary unit'], SiUnit]])
Validators:
- _validate_values_match_type
type
pydantic-field
¤
type: Annotated[
NominalOrOrdinalDType,
Field(
examples=[
"float32",
"uint8",
"uint16",
"int64",
"bool",
]
),
] = "uint8"
values
pydantic-field
¤
values: TVs
A fixed set of nominal or an ascending sequence of ordinal values.
In this case data.type is required to be an unsigned integer type, e.g. 'uint8'.
String values are interpreted as labels for tensor values 0, ..., N.
Note: as YAML 1.2 does not natively support a "set" datatype,
nominal values should be given as a sequence (aka list/array) as well.
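When string values label tensor values 0, ..., N, both lookup directions (tensor value to label and back) can be built as below. labels_to_lookup is a hypothetical helper, not part of the spec:

```python
def labels_to_lookup(values):
    """Build value->label and label->value mappings for string `values`
    labeling tensor values 0..N-1."""
    if not values:
        raise ValueError("values must not be empty")
    to_label = dict(enumerate(values))
    to_index = {label: i for i, label in to_label.items()}
    return to_label, to_index
```

This matches the uint8-typed use case: a segmentation tensor holding 0/1/2 can carry the class names as its values list.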
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
OnnxWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
}
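Expressed as plain data, a minimal weights entry satisfying the schema above can be sketched as a mapping; the file name, opset version, and comment below are illustrative placeholders, not taken from any real model:

```python
# Minimal OnnxWeightsDescr-shaped mapping (file name, opset version and comment
# are made-up placeholders). Per the schema, "source" and "opset_version" are
# required, and "opset_version" must be an integer >= 7.
onnx_weights = {
    "source": "weights.onnx",
    "opset_version": 17,
    "parent": "pytorch_state_dict",  # converted from the PyTorch weights
    "comment": "Exported with torch.onnx.export.",
}

missing = {"source", "opset_version"} - onnx_weights.keys()
assert not missing
assert onnx_weights["opset_version"] >= 7
```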
Fields:
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
- opset_version (Annotated[int, Ge(7)])
- external_data (Optional[FileDescr_external_data])
Validators:
- _validate_sha256
- _validate
- _validate_external_data_unique_file_name
authors
pydantic-field
authors: Optional[List[Author]] = None
Authors
Either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
comment: str = ''
A comment about this weights entry, for example how these weights were created.
external_data
pydantic-field
external_data: Optional[FileDescr_external_data] = None
Source of the external ONNX data file holding the weights. (If present source holds the ONNX architecture without weights).
parent
pydantic-field
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
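This invariant — exactly one weights entry without a parent, and every parent pointing at an existing entry — can be sketched as a standalone check (the plain dicts below are a simplified stand-in for the actual weights description objects):

```python
# Sketch: validate the parent invariant over a mapping of
# weights-format name -> entry (entries shown as plain dicts).
def check_parent_invariant(weights: dict) -> None:
    roots = [fmt for fmt, entry in weights.items() if entry.get("parent") is None]
    if len(roots) != 1:
        raise ValueError(f"expected exactly one entry without a parent, got {roots}")
    for fmt, entry in weights.items():
        parent = entry.get("parent")
        if parent is not None and parent not in weights:
            raise ValueError(f"{fmt}: parent {parent!r} is not among the weights entries")

check_parent_invariant({
    "pytorch_state_dict": {},                         # trained original, no parent
    "torchscript": {"parent": "pytorch_state_dict"},  # converted
    "onnx": {"parent": "pytorch_state_dict"},         # converted
})
```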
source
pydantic-field
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Source of the weights file.
download
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
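`get_reader` (and its alias `download`) open the file source, fetching it first only when it is remote. A much simplified stand-in for that behaviour, assuming a flat cache directory rather than the library's actual cache layout:

```python
import os
import urllib.request

def open_source(source: str, cache_dir: str = ".bioimageio_cache"):
    """Open a local file directly; download a URL into cache_dir first (sketch)."""
    if source.startswith(("http://", "https://")):
        os.makedirs(cache_dir, exist_ok=True)
        local = os.path.join(cache_dir, os.path.basename(source))
        if not os.path.exists(local):  # "download if needed"
            urllib.request.urlretrieve(source, local)
        return open(local, "rb")
    return open(source, "rb")
```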
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
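The check performed by `validate_sha256` boils down to hashing the source file and comparing the 64-character hex digest; a minimal equivalent using only the standard library:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def check_sha256(path: str, expected: str) -> None:
    """Raise if the file's digest does not match the expected 64-char hex hash."""
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"SHA-256 mismatch: expected {expected}, got {actual}")
```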
OrcidId
Bases: ValidatedString
An ORCID identifier, see https://orcid.org/
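Beyond the basic format, the last character of an ORCID iD is a check digit computed with ISO 7064 mod 11-2. A small validator sketch:

```python
def orcid_check_digit(base_digits: str) -> str:
    """ISO 7064 mod 11-2 check character for the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check length and check digit of a hyphenated ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# 0000-0002-1825-0097 is the example iD from ORCID's own documentation.
assert is_valid_orcid("0000-0002-1825-0097")
```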
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
OutputTensorDescr
pydantic-model
Bases: TensorDescrBase[OutputAxis]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above [BinarizeKwargs.threshold][]/[BinarizeAlongAxisKwargs.threshold][]\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [BinarizeDescr][]",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ClipDescr][]",
"properties": {
"min": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum value for clipping.\n\nExclusive with [min_percentile][]",
"title": "Min"
},
"min_percentile": {
"anyOf": [
{
"exclusiveMaximum": 100,
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Minimum percentile for clipping.\n\nExclusive with [min][].\n\nIn range [0, 100).",
"title": "Min Percentile"
},
"max": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum value for clipping.\n\nExclusive with `max_percentile`.",
"title": "Max"
},
"max_percentile": {
"anyOf": [
{
"exclusiveMinimum": 1,
"maximum": 100,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Maximum percentile for clipping.\n\nExclusive with `max`.\n\nIn range (1, 100].",
"title": "Max Percentile"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to determine percentiles jointly,\n\ni.e. axes to reduce to compute min/max from `min_percentile`/`max_percentile`.\nFor example to clip 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape with clipped values per channel, specify `axes=('batch', 'x', 'y')`.\nTo clip samples independently, leave out the 'batch' axis.\n\nOnly valid if `min_percentile` and/or `max_percentile` are set.\n\nDefault: Compute percentiles over all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
}
},
"title": "model.v0_5.ClipKwargs",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [EnsureDtypeDescr][]",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [FixedZeroMeanUnitVarianceDescr][]",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleMeanVarianceKwargs][]",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for [SoftmaxDescr][]",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by standard deviation.\n\nExamples:\n Subtract tensor mean and divide by standard deviation\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "output",
"description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has to be an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model.\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n If not given this is added to cast to this tensor's `data.type`.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.OutputTensorDescr",
"type": "object"
}
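The `SizeReference` rule embedded in the schema above, `axis.size = reference.size * reference.scale / axis.scale + offset` with fractions rounded down, can be checked with plain arithmetic; the helper name below is illustrative, not part of the spec:

```python
import math

def size_from_reference(reference_size: int, reference_scale: float,
                        axis_scale: float, offset: int = 0) -> int:
    # axis.size = reference.size * reference.scale / axis.scale + offset,
    # with fractions rounded down per the SizeReference notes
    return math.floor(reference_size * reference_scale / axis_scale) + offset

# the schema's doctest example: w = 100 px at scale 2, h at scale 4, offset -1
print(size_from_reference(100, 2, 4, offset=-1))  # 49
```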
Fields:
- description(Annotated[str, MaxLen(128)])
- axes(NotEmpty[Sequence[IO_AxisT]])
- test_tensor(FAIR[Optional[FileDescr_]])
- sample_tensor(FAIR[Optional[FileDescr_]])
- data(Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])
- id(TensorId)
- postprocessing(List[PostprocessingDescr])

Validators:
- _validate_axes → axes
- _validate_sample_tensor
- _check_data_type_across_channels → data
- _check_data_matches_channelaxis
- _validate_postprocessing_kwargs
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
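The `_check_data_type_across_channels` validator can be approximated by a standalone check (hypothetical helper, not the actual implementation):

```python
from typing import Dict, List

def check_data_type_across_channels(channel_descrs: List[Dict[str, str]]) -> None:
    # when `data` is given per channel, all entries must share a single `type`
    types = {d["type"] for d in channel_descrs}
    if len(types) > 1:
        raise ValueError(f"inconsistent data types across channels: {sorted(types)}")

check_data_type_across_channels([{"type": "float32"}, {"type": "float32"}])  # passes
```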
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
id
pydantic-field
¤
id: TensorId
Output tensor id. No duplicates are allowed across all inputs and outputs.
postprocessing
pydantic-field
¤
postprocessing: List[PostprocessingDescr]
Description of how this output should be postprocessed.
postprocessing always ends with an 'ensure_dtype' operation.
If not given this is added to cast to this tensor's data.type.
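The rule above can be sketched as a small list transform (hypothetical helper; the actual logic lives in the spec's validators):

```python
from typing import List

def with_trailing_ensure_dtype(postprocessing: List[dict], data_type: str) -> List[dict]:
    # append an 'ensure_dtype' cast to `data_type` unless the chain already ends with one
    if postprocessing and postprocessing[-1].get("id") == "ensure_dtype":
        return postprocessing
    return [*postprocessing, {"id": "ensure_dtype", "kwargs": {"dtype": data_type}}]

print(with_trailing_ensure_dtype([{"id": "sigmoid"}], "float32"))
```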
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
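A quick round-trip through the required numpy.lib `.npy` encoding (assuming numpy is installed) shows the expected serialization:

```python
import io

import numpy as np

# serialize a candidate test tensor to the .npy format and read it back
tensor = np.zeros((1, 3, 64, 64), dtype=np.float32)
buf = io.BytesIO()
np.save(buf, tensor)
buf.seek(0)
loaded = np.load(buf)
print(loaded.shape, loaded.dtype)
```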
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
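Conceptually, `get_axis_sizes_for_array` pairs the described axes with the array's dimensions; a simplified standalone sketch (not the actual source):

```python
from typing import Dict, Sequence

def axis_sizes_for_shape(axis_ids: Sequence[str], shape: Sequence[int]) -> Dict[str, int]:
    # map each declared axis id to the corresponding array dimension; counts must agree
    if len(axis_ids) != len(shape):
        raise ValueError(f"expected {len(axis_ids)} dimensions, got {len(shape)}")
    return dict(zip(axis_ids, shape))

print(axis_sizes_for_shape(["batch", "channel", "y", "x"], (1, 3, 64, 64)))
```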
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ParameterizedSize
pydantic-model
¤
Bases: Node
Describes a range of valid tensor axis sizes as size = min + n*step.
- min and step are given by the model description.
- All blocksize parameters n = 0, 1, 2, ... yield a valid size.
- A greater blocksize parameter n results in a greater size. This allows to adjust the axis size more generically.
Show JSON schema:
{
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize parameters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize parameter n = 0,1,2,... results in a greater **size**.\n  This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
}
Fields:
- min(int)
- step(int)
get_n
¤
get_n(s: int) -> ParameterizedSize_N
return the smallest n parameterizing a size greater than or equal to s
Source code in src/bioimageio/spec/model/v0_5.py
get_size
¤
get_size(n: ParameterizedSize_N) -> int
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_size
¤
validate_size(size: int) -> int
Source code in src/bioimageio/spec/model/v0_5.py
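Taken together, `get_size`, `get_n`, and `validate_size` reduce to simple arithmetic over `size = min + n*step`; a sketch with example parameters (not the actual source):

```python
import math

MIN, STEP = 32, 16  # example parameters, not defaults from the spec

def get_size(n: int) -> int:
    return MIN + n * STEP

def get_n(s: int) -> int:
    # smallest n such that get_size(n) >= s
    return max(0, math.ceil((s - MIN) / STEP))

def validate_size(size: int) -> int:
    if size < MIN or (size - MIN) % STEP != 0:
        raise ValueError(f"invalid size {size} for min={MIN}, step={STEP}")
    return size

print(get_size(get_n(40)))  # 48: smallest valid size >= 40
```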
PytorchStateDictWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.dependencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
}
Fields:
- source(Annotated[FileSource, AfterValidator(wo_special_file_name)])
- sha256(Optional[Sha256])
- authors(Optional[List[Author]])
- parent(Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment(str)
- architecture(Union[ArchitectureFromFileDescr, ArchitectureFromLibraryDescr])
- pytorch_version(Version)
- dependencies(Optional[FileDescr_dependencies])

Validators:
- _validate_sha256
- _validate
architecture
pydantic-field
¤
architecture: Union[
ArchitectureFromFileDescr, ArchitectureFromLibraryDescr
]
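Per the schema above, an `ArchitectureFromLibraryDescr` resolves as `from <import_from> import <callable>`; a minimal sketch of that resolution (hypothetical helper, not part of the library):

```python
import importlib

def load_architecture(import_from: str, callable_name: str):
    # resolve `from <import_from> import <callable>`; for a real model the
    # callable is expected to return a torch.nn.Module when called with kwargs
    module = importlib.import_module(import_from)
    return getattr(module, callable_name)

# stand-in demo with a stdlib module; a real entry might use
# import_from="my_package.models", callable="MyNetworkClass"
print(load_architecture("math", "sqrt")(9.0))  # 3.0
```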
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that have trained this model resulting in the original weights file.
(If this is the initial weights entry, i.e. it does not have a parent)
Or the person(s) who have converted the weights to this weights format.
(If this is a child weight, i.e. it has a parent field)
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
dependencies
pydantic-field
¤
dependencies: Optional[FileDescr_dependencies] = None
Custom dependencies beyond pytorch described in a Conda environment file.
Allows to specify custom dependencies, see conda docs:
- Exporting an environment file across platforms
- Creating an environment file manually
The conda environment file should include pytorch and any version pinning has to be compatible with pytorch_version.
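A minimal, hypothetical `environment.yaml` satisfying these constraints (names and pins are illustrative only):

```yaml
name: my-model-env
channels:
  - conda-forge
dependencies:
  - pytorch==2.1.0  # must stay compatible with pytorch_version
  - numpy
```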
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
pytorch_version
pydantic-field
¤
pytorch_version: Version
Version of the PyTorch library used.
If architecture.dependencies is specified it has to include pytorch and any version pinning has to be compatible.
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Source of the weights file.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
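The stored hash is the plain SHA-256 hex digest (64 characters) of the source file; a sketch of how it is computed, assuming the file content is already in memory:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # hex digest as stored in the spec's `sha256` fields (64 hex characters)
    return hashlib.sha256(data).hexdigest()

print(sha256_hex(b""))  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```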
RelativeFilePath
¤
Bases: RelativePathBase[Union[AbsoluteFilePath, HttpUrl, ZipPath]]
flowchart TD
bioimageio.spec.model.v0_5.RelativeFilePath[RelativeFilePath]
bioimageio.spec._internal.io.RelativePathBase[RelativePathBase]
bioimageio.spec._internal.io.RelativePathBase --> bioimageio.spec.model.v0_5.RelativeFilePath
click bioimageio.spec.model.v0_5.RelativeFilePath href "" "bioimageio.spec.model.v0_5.RelativeFilePath"
click bioimageio.spec._internal.io.RelativePathBase href "" "bioimageio.spec._internal.io.RelativePathBase"
A path relative to the rdf.yaml file (also if the RDF source is a URL).
Methods:
| Name | Description |
|---|---|
__repr__ |
|
__str__ |
|
absolute |
get the absolute path/url |
format |
|
get_absolute |
|
model_post_init |
add validation @private |
Attributes:
| Name | Type | Description |
|---|---|---|
path |
PurePath
|
|
suffix |
|
__repr__
¤
__repr__() -> str
Source code in src/bioimageio/spec/_internal/io.py
__str__
¤
__str__() -> str
Source code in src/bioimageio/spec/_internal/io.py
absolute
¤
absolute() -> AbsolutePathT
get the absolute path/url
(resolved at time of initialization with the root of the ValidationContext)
Source code in src/bioimageio/spec/_internal/io.py
format
¤
format() -> str
Source code in src/bioimageio/spec/_internal/io.py
get_absolute
¤
get_absolute(
root: "RootHttpUrl | Path | AnyUrl | ZipFile",
) -> "AbsoluteFilePath | HttpUrl | ZipPath"
Source code in src/bioimageio/spec/_internal/io.py
model_post_init
¤
model_post_init(__context: Any) -> None
add validation @private
Source code in src/bioimageio/spec/_internal/io.py
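The resolution behavior can be approximated with stdlib tools (a sketch; the actual implementation also handles ZIP archives and validation context roots):

```python
from pathlib import PurePosixPath
from urllib.parse import urljoin

def resolve_relative(root: str, rel: str) -> str:
    # a RelativeFilePath is interpreted relative to the rdf.yaml location,
    # whether that root is a URL or a local directory
    if root.startswith(("http://", "https://")):
        return urljoin(root.rstrip("/") + "/", rel)
    return str(PurePosixPath(root) / rel)

print(resolve_relative("https://example.com/models", "weights.pt"))
```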
ReproducibilityTolerance
pydantic-model
¤
Bases: Node
Describes what small numerical differences -- if any -- may be tolerated in the generated output when executing in different environments.
A tensor element output is considered mismatched to the test_tensor if abs(output - test_tensor) > absolute_tolerance + relative_tolerance * abs(test_tensor). (Internally we call numpy.testing.assert_allclose.)
Motivation
For testing we can request the respective deep learning frameworks to be as reproducible as possible by setting seeds and choosing deterministic algorithms, but differences in operating systems, available hardware and installed drivers may still lead to numerical differences.
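The mismatch criterion above can be checked directly with numpy; a sketch using the field defaults (the tensors are made up):

```python
import numpy as np

test_tensor = np.array([1.0, 2.0, 3.0])  # stored test output
output = test_tensor + 1e-4              # reproduced output with a small deviation

atol, rtol = 1e-3, 1e-3  # absolute_tolerance / relative_tolerance defaults
mismatched = np.abs(output - test_tensor) > atol + rtol * np.abs(test_tensor)
print(int(mismatched.sum()))  # 0 -> every element within tolerance

# equivalently, this passes without raising:
np.testing.assert_allclose(output, test_tensor, rtol=rtol, atol=atol)
```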
Show JSON schema:
{
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
}
Fields:
- relative_tolerance (RelativeTolerance)
- absolute_tolerance (AbsoluteTolerance)
- mismatched_elements_per_million (MismatchedElementsPerMillion)
- output_ids (Sequence[TensorId])
- weights_formats (Sequence[WeightsFormat])
absolute_tolerance
pydantic-field
¤
absolute_tolerance: AbsoluteTolerance = 0.001
Maximum absolute tolerance of reproduced test tensor.
mismatched_elements_per_million
pydantic-field
¤
mismatched_elements_per_million: MismatchedElementsPerMillion = 100
Maximum number of mismatched elements/pixels per million to tolerate.
output_ids
pydantic-field
¤
output_ids: Sequence[TensorId] = ()
Limits the output tensor IDs these reproducibility details apply to.
relative_tolerance
pydantic-field
¤
relative_tolerance: RelativeTolerance = 0.001
Maximum relative tolerance of reproduced test tensor.
weights_formats
pydantic-field
¤
weights_formats: Sequence[WeightsFormat] = ()
Limits the weights formats these details apply to.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ResourceId
¤
Bases: ValidatedString
Methods:
| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
Attributes:
| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
NotEmpty[str],
RestrictCharacters(
string.ascii_lowercase + string.digits + "_-/."
),
annotated_types.Predicate(
lambda s: (
not (s.startswith("/") or s.endswith("/"))
)
),
]
]
the pydantic root model to validate the string
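The constraint can be read off the root model: lowercase letters, digits, and `_-/.`, non-empty, and no leading or trailing slash. A sketch re-implementing it with a regex (the helper and example IDs are illustrative, not part of bioimageio.spec):

```python
import re
import string

# character set taken from the RestrictCharacters annotation above
ALLOWED = re.compile(rf"^[{string.ascii_lowercase}{string.digits}_\-/.]+$")

def looks_like_resource_id(s: str) -> bool:
    """Approximation of the ResourceId root model shown above."""
    return bool(ALLOWED.match(s)) and not (s.startswith("/") or s.endswith("/"))

print(looks_like_resource_id("affable-shark"))   # True
print(looks_like_resource_id("/leading-slash"))  # False
print(looks_like_resource_id("UpperCase"))       # False
```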
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
RunMode
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"name": {
"anyOf": [
{
"const": "deepimagej",
"type": "string"
},
{
"type": "string"
}
],
"description": "Run mode name",
"title": "Name"
},
"kwargs": {
"additionalProperties": true,
"description": "Run mode specific key word arguments",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"name"
],
"title": "model.v0_4.RunMode",
"type": "object"
}
Fields:
- name (Annotated[Union[KnownRunMode, str], warn(KnownRunMode, "Unknown run mode '{value}'.")])
- kwargs (Dict[str, Any])
name
pydantic-field
¤
name: Annotated[
Union[KnownRunMode, str],
warn(KnownRunMode, "Unknown run mode '{value}'."),
]
Run mode name
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearAlongAxisKwargs
pydantic-model
¤
Bases: KwargsNode
Key word arguments for ScaleLinearDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
}
Fields:
- axis (Annotated[NonBatchAxisId, Field(examples=['channel'])])
- gain (Union[float, NotEmpty[List[float]]])
- offset (Union[float, NotEmpty[List[float]]])
Validators:
- _validate
axis
pydantic-field
¤
axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]
The axis of gain and offset values.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Fixed linear scaling.
Examples:

1. Scale with scalar gain and offset
   - in YAML

         preprocessing:
           - id: scale_linear
             kwargs:
               gain: 2.0
               offset: 3.0

   - in Python:

         >>> preprocessing = [
         ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain=2.0, offset=3.0))
         ... ]

2. Independent scaling along an axis
   - in YAML

         preprocessing:
           - id: scale_linear
             kwargs:
               axis: 'channel'
               gain: [1.0, 2.0, 3.0]

   - in Python:

         >>> preprocessing = [
         ...     ScaleLinearDescr(
         ...         kwargs=ScaleLinearAlongAxisKwargs(
         ...             axis=AxisId("channel"),
         ...             gain=[1.0, 2.0, 3.0],
         ...         )
         ...     )
         ... ]
Show JSON schema:
{
"$defs": {
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
}
Fields:
- id (Literal['scale_linear'])
- kwargs (Union[ScaleLinearKwargs, ScaleLinearAlongAxisKwargs])
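What a runtime applying this step computes is simply `tensor * gain + offset`. A sketch of the scalar case (the helper function and values are illustrative, not part of bioimageio.spec):

```python
import numpy as np

def apply_scale_linear(tensor: np.ndarray, gain: float = 1.0, offset: float = 0.0) -> np.ndarray:
    """Illustrative scalar scale_linear: multiplicative gain, additive offset."""
    return tensor * gain + offset

x = np.array([0.0, 1.0, 2.0])
print(apply_scale_linear(x, gain=2.0, offset=3.0))  # [3. 5. 7.]
```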
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearKwargs
pydantic-model
¤
Bases: KwargsNode
Key word arguments for ScaleLinearDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "Key word arguments for [ScaleLinearDescr][]",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
}
Fields:
Validators:
- _validate
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleMeanVarianceDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Scale a tensor's data distribution to match another tensor's mean/std.
out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
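A sketch of the formula's runtime effect (illustrative helper, not part of bioimageio.spec): the output takes on the reference tensor's mean exactly and, up to eps, its standard deviation.

```python
import numpy as np

def apply_scale_mean_variance(tensor, ref, eps=1e-6):
    """out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean"""
    mean, std = tensor.mean(), tensor.std()
    return (tensor - mean) / (std + eps) * (ref.std() + eps) + ref.mean()

x = np.array([0.0, 1.0, 2.0, 3.0])
ref = np.array([10.0, 12.0, 14.0, 16.0])
out = apply_scale_mean_variance(x, ref)
print(float(out.mean()), float(out.std()))
```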
Show JSON schema:
{
"$defs": {
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleMeanVarianceKwargs][]",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
}
Fields:
- id (Literal['scale_mean_variance'])
- kwargs (ScaleMeanVarianceKwargs)
implemented_id
class-attribute
¤
implemented_id: Literal["scale_mean_variance"] = (
"scale_mean_variance"
)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleMeanVarianceKwargs
pydantic-model
¤
Bases: KwargsNode
Key word arguments for ScaleMeanVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [ScaleMeanVarianceKwargs][]",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
}
Fields:
- reference_tensor (TensorId)
- axes (Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])])
- eps (Annotated[float, Interval(gt=0, le=0.1)])
axes
pydantic-field
¤
The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize samples independently, leave out the 'batch' axis.
Default: Scale all axes jointly.
eps
pydantic-field
¤
eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06
Epsilon for numeric stability:
out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleRangeDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Scale with percentiles.
Examples:

1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0
   - in YAML

         preprocessing:
           - id: scale_range
             kwargs:
               axes: ['y', 'x']
               max_percentile: 99.8
               min_percentile: 5.0

   - in Python

         >>> preprocessing = [
         ...     ScaleRangeDescr(
         ...         kwargs=ScaleRangeKwargs(
         ...             axes=(AxisId('y'), AxisId('x')),
         ...             max_percentile=99.8,
         ...             min_percentile=5.0,
         ...         )
         ...     )
         ... ]

2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.
   - in YAML

         preprocessing:
           - id: scale_range
             kwargs:
               axes: ['y', 'x']
               max_percentile: 99.8
               min_percentile: 5.0
           - id: clip
             kwargs:
               min: 0.0
               max: 1.0

   - in Python

         >>> preprocessing = [
         ...     ScaleRangeDescr(
         ...         kwargs=ScaleRangeKwargs(
         ...             axes=(AxisId('y'), AxisId('x')),
         ...             max_percentile=99.8,
         ...             min_percentile=5.0,
         ...         )
         ...     ),
         ...     ClipDescr(
         ...         kwargs=ClipKwargs(
         ...             min=0.0,
         ...             max=1.0,
         ...         )
         ...     ),
         ... ]
Show JSON schema:
{
"$defs": {
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
}
Fields:
- id (Literal['scale_range'])
- kwargs (ScaleRangeKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleRangeKwargs
pydantic-model
¤
Bases: KwargsNode
Key word arguments for ScaleRangeDescr
For min_percentile=0.0 (the default) and max_percentile=100 (the default)
this processing step normalizes data to the [0, 1] interval.
For other percentiles the normalized values will partially lie outside the [0, 1]
interval. Use ScaleRange followed by ClipDescr if you want to limit the
normalized values to a range.
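A sketch of the percentile normalization this describes (illustrative helper, not part of bioimageio.spec):

```python
import numpy as np

def apply_scale_range(tensor, min_percentile=0.0, max_percentile=100.0, eps=1e-6):
    """out = (tensor - v_lower) / (v_upper - v_lower + eps)"""
    v_lower = np.percentile(tensor, min_percentile)
    v_upper = np.percentile(tensor, max_percentile)
    return (tensor - v_lower) / (v_upper - v_lower + eps)

x = np.array([0.0, 5.0, 10.0])
out = apply_scale_range(x)  # defaults map the data range to (almost exactly) [0, 1]
print(out.round(6))
```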
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [ScaleRangeDescr][]\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
}
Fields:

- `axes` (`Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])]`)
- `min_percentile` (`Annotated[float, Interval(ge=0, lt=100)]`)
- `max_percentile` (`Annotated[float, Interval(gt=1, le=100)]`)
- `eps` (`Annotated[float, Interval(gt=0, le=0.1)]`)
- `reference_tensor` (`Optional[TensorId]`)

Validators:

- `min_smaller_max`
axes
pydantic-field
¤
The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize samples independently, leave out the "batch" axis.
Default: Scale all axes jointly.
eps
pydantic-field
¤
eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06
Epsilon for numeric stability.
out = (tensor - v_lower) / (v_upper - v_lower + eps);
with v_lower,v_upper values at the respective percentiles.
max_percentile
pydantic-field
¤
max_percentile: Annotated[float, Interval(gt=1, le=100)] = (
100.0
)
The upper percentile used to determine the value to align with one.
Has to be bigger than min_percentile.
The range is 1 to 100 instead of 0 to 100 to avoid mistakenly
accepting percentiles specified in the range 0.0 to 1.0.
min_percentile
pydantic-field
¤
min_percentile: Annotated[float, Interval(ge=0, lt=100)] = (
0.0
)
The lower percentile used to determine the value to align with zero.
reference_tensor
pydantic-field
¤
reference_tensor: Optional[TensorId] = None
Tensor ID to compute the percentiles from. Default: The tensor itself.
For any tensor in inputs only input tensor references are allowed.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
min_smaller_max
pydantic-validator
¤
min_smaller_max(
value: float, info: ValidationInfo
) -> float
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Sha256
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.Sha256[Sha256]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Sha256
click bioimageio.spec.model.v0_5.Sha256 href "" "bioimageio.spec.model.v0_5.Sha256"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
A SHA-256 hash value
See also: API Reference → `utils.get_sha256`
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
str,
StringConstraints(
strip_whitespace=True,
to_lower=True,
min_length=64,
max_length=64,
),
]
]
the pydantic root model to validate the string
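The constraints above (strip whitespace, lowercase, exactly 64 characters) can be mirrored in plain Python. The actual validation is done by pydantic's `StringConstraints`; this sketch only illustrates what a valid `Sha256` value looks like, using the standard library's `hashlib`:

```python
import hashlib

# a SHA-256 hex digest is always 64 lowercase hex characters
digest = hashlib.sha256(b"bioimageio").hexdigest()

# mimic the StringConstraints: strip surrounding whitespace and lowercase
candidate = "  " + digest.upper() + "\n"
normalized = candidate.strip().lower()
```

After normalization, `normalized` equals `digest` and has length 64, so it would pass the `min_length=64, max_length=64` constraint.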
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
SiUnit
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.SiUnit[SiUnit]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.SiUnit
click bioimageio.spec.model.v0_5.SiUnit href "" "bioimageio.spec.model.v0_5.SiUnit"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
An SI unit
Methods:
| Name | Description |
|---|---|
__get_pydantic_core_schema__ |
|
__get_pydantic_json_schema__ |
|
__new__ |
|
Attributes:
| Name | Type | Description |
|---|---|---|
root_model |
Type[RootModel[Any]]
|
the pydantic root model to validate the string |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
Annotated[
str,
StringConstraints(
min_length=1, pattern=SI_UNIT_REGEX
),
BeforeValidator(_normalize_multiplication),
]
]
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
SigmoidDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
The logistic sigmoid function, a.k.a. expit function.
Examples:
- in YAML

  ```yaml
  postprocessing:
    - id: sigmoid
  ```

- in Python:

  >>> postprocessing = [SigmoidDescr()]
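The function this descriptor refers to can be sketched directly; this is the standard logistic sigmoid, not code from bioimageio.spec:

```python
import math


def sigmoid(x: float) -> float:
    # logistic sigmoid, a.k.a. expit: 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))


print(sigmoid(0.0))  # 0.5
```

It maps any real input into (0, 1), which is why it is a common postprocessing step for producing probability-like outputs.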
Show JSON schema:
{
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
}
Fields:

- `id` (`Literal['sigmoid']`)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SizeReference
pydantic-model
¤
Bases: Node
A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.
axis.size = reference.size * reference.scale / axis.scale + offset
Note:
1. The axis and the referenced axis need to have the same unit (or no unit).
2. Batch axes may not be referenced.
3. Fractions are rounded down.
4. If the reference axis is concatenable the referencing axis is assumed to be
concatenable as well with the same block order.
Example:
An anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196 mm².
Let's assume that we want to express the image height h in relation to its width w
instead of only accepting input images of exactly 100*49 pixels
(for example to express a range of valid image shapes by parametrizing w, see ParameterizedSize).
>>> w = SpaceInputAxis(id=AxisId("w"), size=100, unit="millimeter", scale=2)
>>> h = SpaceInputAxis(
... id=AxisId("h"),
... size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("w"), offset=-1),
... unit="millimeter",
... scale=4,
... )
>>> print(h.size.get_size(h, w))
49
⇒ h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49
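The size formula can be checked with a small sketch. This is a hypothetical helper, not the library's `get_size`; it assumes the offset is applied after flooring the scaled quotient, consistent with note 3 and the worked example:

```python
def referenced_size(ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0) -> int:
    # axis.size = reference.size * reference.scale / axis.scale + offset
    # fractions are rounded down (note 3)
    return int(ref_size * ref_scale / axis_scale) + offset


# the example above: w=100 at scale 2, h at scale 4, offset -1
print(referenced_size(100, ref_scale=2, axis_scale=4, offset=-1))  # 49
```

Note that both axes must share a unit (note 1), so only the scale ratio enters the computation.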
Show JSON schema:
{
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
Fields:

- `tensor_id` (`TensorId`): tensor id of the reference axis
- `axis_id` (`AxisId`): axis id of the reference axis
- `offset` (`int`)
get_size
¤
get_size(
axis: Union[
ChannelAxis,
IndexInputAxis,
IndexOutputAxis,
TimeInputAxis,
SpaceInputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
],
ref_axis: Union[
ChannelAxis,
IndexInputAxis,
IndexOutputAxis,
TimeInputAxis,
SpaceInputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
],
n: ParameterizedSize_N = 0,
ref_size: Optional[int] = None,
)
Compute the concrete size for a given axis and its reference axis.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `axis` | `Union[ChannelAxis, IndexInputAxis, IndexOutputAxis, TimeInputAxis, SpaceInputAxis, TimeOutputAxis, TimeOutputAxisWithHalo, SpaceOutputAxis, SpaceOutputAxisWithHalo]` | The axis this SizeReference is the size of. | *required* |
| `ref_axis` | `Union[ChannelAxis, IndexInputAxis, IndexOutputAxis, TimeInputAxis, SpaceInputAxis, TimeOutputAxis, TimeOutputAxisWithHalo, SpaceOutputAxis, SpaceOutputAxisWithHalo]` | The reference axis to compute the size from. | *required* |
| `n` | `ParameterizedSize_N` | If the ref_axis is parameterized (of type `ParameterizedSize`). | `0` |
| `ref_size` | `Optional[int]` | Overwrite the reference size instead of deriving it from ref_axis (ref_axis.scale is still used; any given n is ignored). | `None` |
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SoftmaxDescr
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
The softmax function.
Examples:
- in YAML

  ```yaml
  postprocessing:
    - id: softmax
      kwargs:
        axis: channel
  ```

- in Python:

  >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId("channel")))]
Show JSON schema:
{
"$defs": {
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for [SoftmaxDescr][]",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
}
Fields:

- `id` (`Literal['softmax']`)
- `kwargs` (`SoftmaxKwargs`)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SoftmaxKwargs
pydantic-model
¤
Bases: KwargsNode
Keyword arguments for SoftmaxDescr.
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [SoftmaxDescr][]",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
}
Fields:

- `axis` (`Annotated[NonBatchAxisId, Field(examples=['channel'])]`)
axis
pydantic-field
¤
axis: Annotated[NonBatchAxisId, Field(examples=["channel"])]
The axis to apply the softmax function along. Note: Defaults to 'channel' axis (which may not exist, in which case a different axis id has to be specified).
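Applying softmax along the chosen axis can be sketched for a single slice (e.g. the values along the 'channel' axis at one pixel). This is the standard numerically stable formulation, not code from bioimageio.spec:

```python
import math
from typing import List


def softmax(values: List[float]) -> List[float]:
    # numerically stable softmax over one 1-D slice:
    # subtract the max before exponentiating to avoid overflow
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]


probs = softmax([1.0, 2.0, 3.0])  # non-negative, sums to 1
```

In a full tensor this function would be applied independently to every slice along the configured axis, which is why an axis without softmax semantics (e.g. 'batch') would not be a sensible choice.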
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"type"
],
"title": "model.v0_5.SpaceAxisBase",
"type": "object"
}
Fields:

- `description` (`Annotated[str, MaxLen(128)]`)
- `type` (`Literal['space']`)
- `id` (`Annotated[NonBatchAxisId, Field(examples=['x', 'y', 'z'])]`)
- `unit` (`Optional[SpaceUnit]`)
- `scale` (`Annotated[float, Gt(0)]`)
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
id
pydantic-field
¤
id: Annotated[
NonBatchAxisId, Field(examples=["x", "y", "z"])
]
An axis id unique across all axes of one tensor.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceInputAxis
pydantic-model
¤
Bases: SpaceAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
}
Fields:

- `size` (`Annotated[Union[Annotated[int, Gt(0)], ParameterizedSize, SizeReference], Field(examples=[10, ParameterizedSize(min=32, step=16).model_dump(mode='json'), SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])]`)
- `id` (`Annotated[NonBatchAxisId, Field(examples=['x', 'y', 'z'])]`)
- `description` (`Annotated[str, MaxLen(128)]`)
- `type` (`Literal['space']`)
- `unit` (`Optional[SpaceUnit]`)
- `scale` (`Annotated[float, Gt(0)]`)
- `concatenable` (`bool`)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
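The blockwise processing that a concatenable axis enables can be sketched as follows (a hypothetical helper, not part of bioimageio.spec): a long sample axis is split into consecutive index ranges, each of which fits the model's input tensor description.

```python
def blockwise(length: int, block: int):
    # split a concatenable axis of `length` into consecutive (start, stop) blocks;
    # the final block may be shorter than `block`
    return [(start, min(start + block, length)) for start in range(0, length, block)]


print(blockwise(100, 32))  # [(0, 32), (32, 64), (64, 96), (96, 100)]
```

Outputs produced for each block can then be concatenated back along the corresponding output axis, provided that axis has a SizeReference to the concatenable input axis.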
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
id
pydantic-field
¤
id: Annotated[
NonBatchAxisId, Field(examples=["x", "y", "z"])
]
An axis id unique across all axes of one tensor.
size
pydantic-field
¤
size: Annotated[
Union[
Annotated[int, Gt(0)],
ParameterizedSize,
SizeReference,
],
Field(
examples=[
10,
ParameterizedSize(min=32, step=16).model_dump(
mode="json"
),
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as - fixed integer - parameterized series of valid sizes (ParameterizedSize) - reference to another axis with an optional offset (SizeReference)
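For the parameterized variant, ParameterizedSize describes valid sizes as `size = min + n*step`. A quick sketch (hypothetical helper) enumerates the first few valid sizes for the schema's example `{min: 32, step: 16}`:

```python
def valid_sizes(min_size: int, step: int, n_max: int):
    # size = min + n*step for block size parameters n = 0, 1, 2, ...
    return [min_size + n * step for n in range(n_max + 1)]


print(valid_sizes(32, 16, 3))  # [32, 48, 64, 80]
```

Any of these sizes is a valid extent for the axis; larger n yields a larger size, which is how consumers can pick an input shape that fits their data.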
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obj` | `Union[Any, Mapping[str, Any]]` | The object to validate. | *required* |
| `strict` | `Optional[bool]` | Whether to raise an exception on invalid fields. | `None` |
| `from_attributes` | `Optional[bool]` | Whether to extract data from object attributes. | `None` |
| `context` | `Union[ValidationContext, Mapping[str, Any], None]` | Additional context to pass to the validator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValidationError` | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceOutputAxis
pydantic-model
¤
Bases: SpaceAxisBase, _WithOutputAxisSize
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
}
Fields:

- size (Annotated[Union[Annotated[int, Gt(0)], SizeReference], Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (Annotated[NonBatchAxisId, Field(examples=['x', 'y', 'z'])])
- description (Annotated[str, MaxLen(128)])
- type (Literal['space'])
- unit (Optional[SpaceUnit])
- scale (Annotated[float, Gt(0)])
description
pydantic-field
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
id
pydantic-field
id: Annotated[
NonBatchAxisId, Field(examples=["x", "y", "z"])
]
An axis id unique across all axes of one tensor.
size
pydantic-field
size: Annotated[
Union[Annotated[int, Gt(0)], SizeReference],
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as
- a fixed integer
- a reference to another axis with an optional offset (see SizeReference)
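The SizeReference relation quoted in the schema above (`axis.size = reference.size * reference.scale / axis.scale + offset`, with the fraction rounded down) can be checked with a few lines of plain Python. The helper below is illustrative only and not part of bioimageio.spec:

```python
import math

def size_from_reference(ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0) -> int:
    # axis.size = reference.size * reference.scale / axis.scale + offset,
    # with the fraction rounded down (note 3 of the SizeReference docstring).
    return math.floor(ref_size * ref_scale / axis_scale) + offset

# The docstring example: w has size=100, scale=2; h has scale=4, offset=-1.
print(size_from_reference(100, 2, 4, offset=-1))  # 49
```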
__pydantic_init_subclass__
classmethod
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceOutputAxisWithHalo
pydantic-model
Bases: SpaceAxisBase, WithHalo
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n   `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
}
Fields:

- halo (Annotated[int, Ge(1)])
- size (Annotated[SizeReference, Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (Annotated[NonBatchAxisId, Field(examples=['x', 'y', 'z'])])
- description (Annotated[str, MaxLen(128)])
- type (Literal['space'])
- unit (Optional[SpaceUnit])
- scale (Annotated[float, Gt(0)])
description
pydantic-field
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
halo
pydantic-field
halo: Annotated[int, Ge(1)]
The halo should be cropped from the output tensor to avoid boundary effects.
It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model use size.offset instead.
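The cropping rule stated above is simple arithmetic; a minimal sketch (the helper name is made up for illustration):

```python
def size_after_crop(size: int, halo: int) -> int:
    # The halo is cropped from both sides of the axis:
    # size_after_crop = size - 2 * halo
    assert halo >= 1, "halo has a minimum of 1"
    return size - 2 * halo

print(size_after_crop(128, 8))  # 112
```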
id
pydantic-field
id: Annotated[
NonBatchAxisId, Field(examples=["x", "y", "z"])
]
An axis id unique across all axes of one tensor.
size
pydantic-field
size: Annotated[
SizeReference,
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
A reference to another axis with an optional offset (see SizeReference).
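Putting the two fields of SpaceOutputAxisWithHalo together: the output size is first resolved through the SizeReference relation, then the halo is cropped from both sides. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical values: the referenced axis has size=100 and scale=2;
# this output axis has scale=2 and offset=0, and declares halo=8.
resolved = math.floor(100 * 2 / 2) + 0   # SizeReference: size * scale / scale + offset
usable = resolved - 2 * 8                # crop the halo from both sides
print(resolved, usable)  # 100 84
```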
__pydantic_init_subclass__
classmethod
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TensorDescrBase
pydantic-model
Bases: Node, Generic[IO_AxisT]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset ([SizeReference][])\n- data dependent size using [DataDependentSize][] (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigned integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize parameters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize parameter n = 0,1,2,... results in a greater **size**.\n  This allows adjusting the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n   `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n...     id=AxisId(\"h\"),\n...     size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n...     unit=\"millimeter\",\n...     scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"description": "Tensor id. No duplicates are allowed.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"anyOf": [
{
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
{
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
}
},
"required": [
"id",
"axes"
],
"title": "model.v0_5.TensorDescrBase",
"type": "object"
}
Fields:

- id (TensorId)
- description (Annotated[str, MaxLen(128)])
- axes (NotEmpty[Sequence[IO_AxisT]])
- test_tensor (FAIR[Optional[FileDescr_]])
- sample_tensor (FAIR[Optional[FileDescr_]])
- data (Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])

Validators:

- _validate_axes → axes
- _validate_sample_tensor
- _check_data_type_across_channels → data
- _check_data_matches_channelaxis
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
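The per-channel rule that `_check_data_type_across_channels` enforces can be illustrated with a small stand-alone sketch; the function name and the plain-dict layout below are illustrative, not the library's internal validator:

```python
# Hypothetical sketch of the per-channel constraint: when `data` is given as
# a list (one descriptor per channel), every entry must declare the same
# `type`; a single descriptor trivially satisfies the rule.
def same_type_across_channels(data) -> bool:
    if isinstance(data, dict):  # one descriptor applies to all channels
        return True
    return len({d["type"] for d in data}) == 1

print(same_type_across_channels({"type": "float32"}))                       # True
print(same_type_across_channels([{"type": "uint8"}, {"type": "uint8"}]))    # True
print(same_type_across_channels([{"type": "uint8"}, {"type": "float32"}]))  # False
```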
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
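Because every test tensor must be stored in the numpy.lib `.npy` format, a stdlib-only sanity check is possible: `.npy` files begin with the magic string `\x93NUMPY` followed by one byte each for the major and minor format version. A minimal sketch (the function name is illustrative):

```python
# Sketch: check that raw bytes look like a numpy .npy file without importing
# numpy. Per numpy.lib.format, the file starts with b'\x93NUMPY' plus a
# two-byte format version.
def looks_like_npy(data: bytes) -> bool:
    return len(data) >= 8 and data[:6] == b"\x93NUMPY"

header = b"\x93NUMPY" + bytes([1, 0])  # fabricated version-1.0 prefix
print(looks_like_npy(header + b"\x00" * 8))  # True
print(looks_like_npy(b"\x89PNG\r\n\x1a\n"))  # False
```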
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
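Conceptually, get_axis_sizes_for_array pairs each axis id of the tensor description with the corresponding entry of the array's shape. A simplified stand-in (not the library's implementation; plain strings replace AxisId):

```python
# Simplified stand-in for get_axis_sizes_for_array: zip axis ids with the
# array shape, after checking that the dimensionality matches the described
# axes.
def axis_sizes_for_shape(axis_ids, shape):
    if len(axis_ids) != len(shape):
        raise ValueError(
            f"array has {len(shape)} dimensions but {len(axis_ids)} axes are described"
        )
    return dict(zip(axis_ids, shape))

print(axis_sizes_for_shape(["batch", "channel", "y", "x"], (1, 3, 256, 256)))
# {'batch': 1, 'channel': 3, 'y': 256, 'x': 256}
```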
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TensorId
¤
Bases: LowerCaseIdentifier
flowchart TD
bioimageio.spec.model.v0_5.TensorId[TensorId]
bioimageio.spec._internal.types.LowerCaseIdentifier[LowerCaseIdentifier]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.types.LowerCaseIdentifier --> bioimageio.spec.model.v0_5.TensorId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.types.LowerCaseIdentifier
Methods:

| Name | Description |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

Attributes:

| Name | Type | Description |
|---|---|---|
| root_model | Type[RootModel[Any]] | the pydantic root model to validate the string |
root_model
class-attribute
¤
the pydantic root model to validate the string
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
TensorflowJsWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
}
Fields:

- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
- tensorflow_version (Version)
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])

Validators:

- _validate_sha256
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors:
either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
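The rule that exactly one weights entry lacks a parent can be checked with a short sketch; the dict layout below is illustrative, not the spec's internal representation:

```python
# Sketch: among all weights entries, exactly one (the originally trained
# weights) may lack a parent; every converted entry names its source format.
weights = {
    "pytorch_state_dict": {"parent": None},
    "torchscript": {"parent": "pytorch_state_dict"},
    "onnx": {"parent": "pytorch_state_dict"},
}
initial = [fmt for fmt, w in weights.items() if w["parent"] is None]
assert len(initial) == 1, "expected exactly one initial weights entry"
print(initial[0])  # pytorch_state_dict
```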
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
The multi-file weights. All required files/folders should be a zip archive.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
Version of the TensorFlow library used.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
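What validate_sha256 checks can be reproduced with the standard library: hash the source bytes and compare against the recorded 64-character hex digest. The names below are illustrative, not the library's code:

```python
import hashlib

# Sketch: recompute the SHA-256 of the source bytes and compare it to the
# digest recorded in the `sha256` field (64 hex characters, matching the
# schema's minLength/maxLength of 64).
def sha256_matches(data: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

payload = b"example weights bytes"
digest = hashlib.sha256(payload).hexdigest()
print(len(digest))                          # 64
print(sha256_matches(payload, digest))      # True
print(sha256_matches(b"tampered", digest))  # False
```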
TensorflowSavedModelBundleWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
}
Fields:

- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
- tensorflow_version (Version)
- dependencies (Optional[FileDescr_dependencies])
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])

Validators:

- _validate_sha256
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors:
either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
dependencies
pydantic-field
¤
dependencies: Optional[FileDescr_dependencies] = None
Custom dependencies beyond tensorflow. These should include tensorflow, and any version pinning has to be compatible with tensorflow_version.
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
The multi-file weights. All required files/folders should be a zip archive.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
Version of the TensorFlow library used.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
|
Union[Any, Mapping[str, Any]]
|
The object to validate. |
required |
|
Optional[bool]
|
Whether to raise an exception on invalid fields. |
None
|
|
Optional[bool]
|
Whether to extract data from object attributes. |
None
|
|
Union[ValidationContext, Mapping[str, Any], None]
|
Additional context to pass to the validator. |
None
|
Raises:
| Type | Description |
|---|---|
ValidationError
|
If the object failed validation. |
Returns:
| Type | Description |
|---|---|
Self
|
The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
TimeAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"type"
],
"title": "model.v0_5.TimeAxisBase",
"type": "object"
}
Fields:

- description (Annotated[str, MaxLen(128)])
- type (Literal['time'])
- id (NonBatchAxisId)
- unit (Optional[TimeUnit])
- scale (Annotated[float, Gt(0)])
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TimeInputAxis
pydantic-model
¤
Bases: TimeAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes ([ParameterizedSize][])\n- reference to another axis with an optional offset ([SizeReference][])",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a [SizeReference][] to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
}
Fields:

- size (Annotated[Union[Annotated[int, Gt(0)], ParameterizedSize, SizeReference], Field(examples=[10, ParameterizedSize(min=32, step=16).model_dump(mode='json'), SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (NonBatchAxisId)
- description (Annotated[str, MaxLen(128)])
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (Annotated[float, Gt(0)])
- concatenable (bool)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Annotated[
Union[
Annotated[int, Gt(0)],
ParameterizedSize,
SizeReference,
],
Field(
examples=[
10,
ParameterizedSize(min=32, step=16).model_dump(
mode="json"
),
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as

- a fixed integer
- a parameterized series of valid sizes (ParameterizedSize)
- a reference to another axis with an optional offset (SizeReference)
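The two non-fixed size variants follow simple arithmetic: for ParameterizedSize, valid sizes are size = min + n*step for n = 0, 1, 2, …; for SizeReference, axis.size = reference.size * reference.scale / axis.scale + offset, with fractions rounded down. A sketch of both (function names are illustrative):

```python
# ParameterizedSize: valid sizes form the series min + n*step, n = 0, 1, 2, ...
def is_valid_parameterized_size(size: int, min_: int, step: int) -> bool:
    return size >= min_ and (size - min_) % step == 0

# SizeReference: axis.size = reference.size * reference.scale / axis.scale
# + offset, with the fraction rounded down.
def size_from_reference(ref_size: int, ref_scale: float, own_scale: float,
                        offset: int = 0) -> int:
    return int(ref_size * ref_scale / own_scale) + offset

print(is_valid_parameterized_size(64, 32, 16))  # True  (n = 2)
print(is_valid_parameterized_size(65, 32, 16))  # False
# The SizeReference doctest: w = 100 px at scale 2, h at scale 4, offset -1:
print(size_from_reference(100, 2, 4, offset=-1))  # 49
```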
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TimeOutputAxis
pydantic-model
¤
Bases: TimeAxisBase, _WithOutputAxisSize
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
}
Fields:
- size (Annotated[Union[Annotated[int, Gt(0)], SizeReference], Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (NonBatchAxisId)
- description (Annotated[str, MaxLen(128)])
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (Annotated[float, Gt(0)])
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Annotated[
Union[Annotated[int, Gt(0)], SizeReference],
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
The size/length of this axis can be specified as:
- a fixed integer
- a reference to another axis with an optional offset (see SizeReference)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TimeOutputAxisWithHalo
pydantic-model
¤
Bases: TimeAxisBase, WithHalo
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
}
Fields:
- halo (Annotated[int, Ge(1)])
- size (Annotated[SizeReference, Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
- id (NonBatchAxisId)
- description (Annotated[str, MaxLen(128)])
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (Annotated[float, Gt(0)])
description
pydantic-field
¤
description: Annotated[str, MaxLen(128)] = ''
A short description of this axis beyond its type and id.
halo
pydantic-field
¤
halo: Annotated[int, Ge(1)]
The halo should be cropped from the output tensor to avoid boundary effects.
It is cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model, use size.offset instead.
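The cropping arithmetic can be checked with a small helper (hypothetical, not part of the library):

```python
def size_after_halo_crop(size: int, halo: int) -> int:
    """An output axis with a halo is cropped on both sides:
    size_after_crop = size - 2 * halo."""
    cropped = size - 2 * halo
    if cropped <= 0:
        raise ValueError("halo too large for axis size")
    return cropped


# a 256-frame output with a halo of 16 yields 224 usable frames
assert size_after_halo_crop(256, 16) == 224
```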
size
pydantic-field
¤
size: Annotated[
SizeReference,
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
reference to another axis with an optional offset (see SizeReference)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TorchscriptWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
}
Fields:
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
- pytorch_version (Version)

Validators:
- _validate_sha256
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors:
Either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
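To illustrate the parent relation, an abbreviated weights section could look like this in YAML (file names and the PyTorch version are made up; the pytorch_state_dict entry is truncated, as it requires further fields such as the architecture):

```yaml
weights:
  pytorch_state_dict:            # initial weights from training: no `parent`
    source: weights.pt
    # ...further required fields (e.g. the architecture) omitted here
  torchscript:
    source: weights_torchscript.pt
    parent: pytorch_state_dict   # converted from the entry above
    comment: "traced with torch.jit.trace"
    pytorch_version: "2.1"
```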
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Source of the weights file.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
TrainingDetails
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": true,
"properties": {
"training_preprocessing": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Detailed image preprocessing steps during model training:\n\nMention:\n- *Normalization methods*\n- *Augmentation strategies*\n- *Resizing/resampling procedures*\n- *Artifact handling*",
"title": "Training Preprocessing"
},
"training_epochs": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Number of training epochs.",
"title": "Training Epochs"
},
"training_batch_size": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Batch size used in training.",
"title": "Training Batch Size"
},
"initial_learning_rate": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Initial learning rate used in training.",
"title": "Initial Learning Rate"
},
"learning_rate_schedule": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Learning rate schedule used in training.",
"title": "Learning Rate Schedule"
},
"loss_function": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Loss function used in training, e.g. nn.MSELoss.",
"title": "Loss Function"
},
"loss_function_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `loss_function`",
"title": "Loss Function Kwargs",
"type": "object"
},
"optimizer": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "optimizer, e.g. torch.optim.Adam",
"title": "Optimizer"
},
"optimizer_kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `optimizer`",
"title": "Optimizer Kwargs",
"type": "object"
},
"regularization": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Regularization techniques used during training, e.g. drop-out or weight decay.",
"title": "Regularization"
},
"training_duration": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Total training duration in hours.",
"title": "Training Duration"
}
},
"title": "model.v0_5.TrainingDetails",
"type": "object"
}
Fields:
- training_preprocessing (Optional[str])
- training_epochs (Optional[float])
- training_batch_size (Optional[float])
- initial_learning_rate (Optional[float])
- learning_rate_schedule (Optional[str])
- loss_function (Optional[str])
- loss_function_kwargs (Dict[str, YamlValue])
- optimizer (Optional[str])
- optimizer_kwargs (Dict[str, YamlValue])
- regularization (Optional[str])
- training_duration (Optional[float])
initial_learning_rate
pydantic-field
¤
initial_learning_rate: Optional[float] = None
Initial learning rate used in training.
learning_rate_schedule
pydantic-field
¤
learning_rate_schedule: Optional[str] = None
Learning rate schedule used in training.
loss_function
pydantic-field
¤
loss_function: Optional[str] = None
Loss function used in training, e.g. nn.MSELoss.
loss_function_kwargs
pydantic-field
¤
loss_function_kwargs: Dict[str, YamlValue]
key word arguments for the loss_function
optimizer_kwargs
pydantic-field
¤
optimizer_kwargs: Dict[str, YamlValue]
key word arguments for the optimizer
regularization
pydantic-field
¤
regularization: Optional[str] = None
Regularization techniques used during training, e.g. drop-out or weight decay.
training_batch_size
pydantic-field
¤
training_batch_size: Optional[float] = None
Batch size used in training.
training_duration
pydantic-field
¤
training_duration: Optional[float] = None
Total training duration in hours.
training_preprocessing
pydantic-field
¤
training_preprocessing: Optional[str] = None
Detailed image preprocessing steps during model training:
Mention:
- Normalization methods
- Augmentation strategies
- Resizing/resampling procedures
- Artifact handling
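As an illustration, a TrainingDetails entry could be written in YAML like this (field names are taken from the schema above; the values and the enclosing `training_details` key are made up for the example):

```yaml
training_details:
  training_preprocessing: "percentile normalization (1-99); random flips and 90 degree rotations"
  training_epochs: 100
  training_batch_size: 8
  initial_learning_rate: 1.0e-4
  learning_rate_schedule: "cosine decay"
  loss_function: "nn.MSELoss"
  optimizer: "torch.optim.Adam"
  regularization: "weight decay 1e-5"
  training_duration: 12.5
```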
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Uploader
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
}
Fields:
- email
- name
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |

Raises:

| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |

Returns:

| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Version
¤
Bases: RootModel[Union[str, int, float]]
wraps a packaging.version.Version instance for validation in pydantic models
Methods:

| Name | Description |
|---|---|
| __eq__ |  |
| __lt__ |  |
| __str__ |  |
| model_post_init | set _version attribute |
Attributes:

| Name | Type | Description |
|---|---|---|
| base_version | str | The "base version" of the version. |
| dev | Optional[int] | The development number of the version. |
| epoch | int | The epoch of the version. |
| is_devrelease | bool | Whether this version is a development release. |
| is_postrelease | bool | Whether this version is a post-release. |
| is_prerelease | bool | Whether this version is a pre-release. |
| local | Optional[str] | The local version segment of the version. |
| major | int | The first item of release, or 0 if unavailable. |
| micro | int | The third item of release, or 0 if unavailable. |
| minor | int | The second item of release, or 0 if unavailable. |
| post | Optional[int] | The post-release number of the version. |
| pre | Optional[Tuple[str, int]] | The pre-release segment of the version. |
| public | str | The public portion of the version. |
| release | Tuple[int, ...] | The components of the "release" segment of the version. |
base_version
property
¤
base_version: str
The "base version" of the version.
>>> Version("1.2.3").base_version
'1.2.3'
>>> Version("1.2.3+abc").base_version
'1.2.3'
>>> Version("1!1.2.3+abc.dev1").base_version
'1!1.2.3'
The "base version" is the public version of the project without any pre or post release markers.
dev
property
¤
dev: Optional[int]
The development number of the version.
>>> print(Version("1.2.3").dev)
None
>>> Version("1.2.3.dev1").dev
1
epoch
property
¤
epoch: int
The epoch of the version.
>>> Version("2.0.0").epoch
0
>>> Version("1!2.0.0").epoch
1
is_devrelease
property
¤
is_devrelease: bool
Whether this version is a development release.
>>> Version("1.2.3").is_devrelease
False
>>> Version("1.2.3.dev1").is_devrelease
True
is_postrelease
property
¤
is_postrelease: bool
Whether this version is a post-release.
>>> Version("1.2.3").is_postrelease
False
>>> Version("1.2.3.post1").is_postrelease
True
is_prerelease
property
¤
is_prerelease: bool
Whether this version is a pre-release.
>>> Version("1.2.3").is_prerelease
False
>>> Version("1.2.3a1").is_prerelease
True
>>> Version("1.2.3b1").is_prerelease
True
>>> Version("1.2.3rc1").is_prerelease
True
>>> Version("1.2.3dev1").is_prerelease
True
local
property
¤
local: Optional[str]
The local version segment of the version.
>>> print(Version("1.2.3").local)
None
>>> Version("1.2.3+abc").local
'abc'
major
property
¤
major: int
The first item of release, or 0 if unavailable.
>>> Version("1.2.3").major
1
micro
property
¤
micro: int
The third item of release, or 0 if unavailable.
>>> Version("1.2.3").micro
3
>>> Version("1").micro
0
minor
property
¤
minor: int
The second item of release, or 0 if unavailable.
>>> Version("1.2.3").minor
2
>>> Version("1").minor
0
post
property
¤
post: Optional[int]
The post-release number of the version.
>>> print(Version("1.2.3").post)
None
>>> Version("1.2.3.post1").post
1
pre
property
¤
pre: Optional[Tuple[str, int]]
The pre-release segment of the version.
>>> print(Version("1.2.3").pre)
None
>>> Version("1.2.3a1").pre
('a', 1)
>>> Version("1.2.3b1").pre
('b', 1)
>>> Version("1.2.3rc1").pre
('rc', 1)
public
property
¤
public: str
The public portion of the version.
>>> Version("1.2.3").public
'1.2.3'
>>> Version("1.2.3+abc").public
'1.2.3'
>>> Version("1.2.3+abc.dev1").public
'1.2.3'
release
property
¤
release: Tuple[int, ...]
The components of the "release" segment of the version.
>>> Version("1.2.3").release
(1, 2, 3)
>>> Version("2.0.0").release
(2, 0, 0)
>>> Version("1!2.0.0.post0").release
(2, 0, 0)
Includes trailing zeroes but not the epoch or any pre-release / development / post-release suffixes.
__eq__
¤
__eq__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
__lt__
¤
__lt__(other: Any)
Source code in src/bioimageio/spec/_internal/version_type.py
__str__
¤
__str__()
Source code in src/bioimageio/spec/_internal/version_type.py
model_post_init
¤
model_post_init(__context: Any) -> None
set _version attribute
Source code in src/bioimageio/spec/_internal/version_type.py
WeightsDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"KerasHdf5WeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
},
"OnnxWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
},
"PytorchStateDictWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"TensorflowJsWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
},
"TensorflowSavedModelBundleWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
},
"TorchscriptWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"keras_hdf5": {
"anyOf": [
{
"$ref": "#/$defs/KerasHdf5WeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"onnx": {
"anyOf": [
{
"$ref": "#/$defs/OnnxWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"pytorch_state_dict": {
"anyOf": [
{
"$ref": "#/$defs/PytorchStateDictWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_js": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowJsWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_saved_model_bundle": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"torchscript": {
"anyOf": [
{
"$ref": "#/$defs/TorchscriptWeightsDescr"
},
{
"type": "null"
}
],
"default": null
}
},
"title": "model.v0_5.WeightsDescr",
"type": "object"
}
Fields:
- keras_hdf5 (Optional[KerasHdf5WeightsDescr])
- onnx (Optional[OnnxWeightsDescr])
- pytorch_state_dict (Optional[PytorchStateDictWeightsDescr])
- tensorflow_js (Optional[TensorflowJsWeightsDescr])
- tensorflow_saved_model_bundle (Optional[TensorflowSavedModelBundleWeightsDescr])
- torchscript (Optional[TorchscriptWeightsDescr])
Validators:
- check_entries
pytorch_state_dict
pydantic-field
¤
pytorch_state_dict: Optional[
PytorchStateDictWeightsDescr
] = None
tensorflow_saved_model_bundle
pydantic-field
¤
tensorflow_saved_model_bundle: Optional[
TensorflowSavedModelBundleWeightsDescr
] = None
__getitem__
¤
__getitem__(
key: Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
],
)
Source code in src/bioimageio/spec/model/v0_5.py
__setitem__
¤
__setitem__(
key: Literal["keras_hdf5"],
value: Optional[KerasHdf5WeightsDescr],
) -> None
__setitem__(
key: Literal["onnx"], value: Optional[OnnxWeightsDescr]
) -> None
__setitem__(
key: Literal["pytorch_state_dict"],
value: Optional[PytorchStateDictWeightsDescr],
) -> None
__setitem__(
key: Literal["tensorflow_js"],
value: Optional[TensorflowJsWeightsDescr],
) -> None
__setitem__(
key: Literal["tensorflow_saved_model_bundle"],
value: Optional[TensorflowSavedModelBundleWeightsDescr],
) -> None
__setitem__(
key: Literal["torchscript"],
value: Optional[TorchscriptWeightsDescr],
) -> None
__setitem__(
key: Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
],
value: Optional[SpecificWeightsDescr],
)
Source code in src/bioimageio/spec/model/v0_5.py
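The overloads above give `WeightsDescr` dict-style access keyed by the weights-format name. A minimal plain-Python stand-in for that access pattern (an illustration only, not the bioimageio implementation; `WeightsSketch` and its internals are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

# the six weights formats listed in the Literal key type above
FORMATS = (
    "keras_hdf5", "onnx", "pytorch_state_dict",
    "tensorflow_js", "tensorflow_saved_model_bundle", "torchscript",
)

@dataclass
class WeightsSketch:
    # one optional entry per supported weights format
    entries: Dict[str, Optional[Any]] = field(
        default_factory=lambda: {f: None for f in FORMATS}
    )

    def __getitem__(self, key: str) -> Optional[Any]:
        if key not in FORMATS:
            raise KeyError(key)
        return self.entries[key]

    def __setitem__(self, key: str, value: Optional[Any]) -> None:
        if key not in FORMATS:
            raise KeyError(key)
        self.entries[key] = value

w = WeightsSketch()
w["torchscript"] = {"source": "weights.pt"}
```

In the real class the key type is a `Literal`, so a typo in the format name is caught by a type checker rather than at runtime.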
check_entries
pydantic-validator
¤
check_entries() -> Self
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
WeightsEntryDescrBase
pydantic-model
¤
Bases: FileDescr
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
}
},
"required": [
"source"
],
"title": "model.v0_5.WeightsEntryDescrBase",
"type": "object"
}
Fields:
- sha256 (Optional[Sha256])
- source (Annotated[FileSource, AfterValidator(wo_special_file_name)])
- authors (Optional[List[Author]])
- parent (Annotated[Optional[WeightsFormat], Field(examples=['pytorch_state_dict'])])
- comment (str)
Validators:
- _validate_sha256
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Annotated[
Optional[WeightsFormat],
Field(examples=["pytorch_state_dict"]),
] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
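The parent rule above (every entry except the originally trained one names an existing entry as its parent) can be sketched as a plain-Python check. This is an illustration of the constraint, not the spec's actual validator, and `check_parent_rule` is a hypothetical helper:

```python
from typing import Dict, Optional

def check_parent_rule(entries: Dict[str, Optional[str]]) -> None:
    """entries maps a weights-format name -> its `parent` format (or None).

    Exactly one entry (the originally trained weights) may lack a parent;
    every other entry must reference an existing entry as its parent.
    """
    roots = [fmt for fmt, parent in entries.items() if parent is None]
    if len(roots) != 1:
        raise ValueError(f"expected exactly one entry without parent, got {roots}")
    for fmt, parent in entries.items():
        if parent is not None and parent not in entries:
            raise ValueError(f"{fmt} references missing parent {parent!r}")

# valid: torchscript was converted from the trained pytorch_state_dict weights
check_parent_rule({"pytorch_state_dict": None, "torchscript": "pytorch_state_dict"})
```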
source
pydantic-field
¤
source: Annotated[
FileSource, AfterValidator(wo_special_file_name)
]
Source of the weights file.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
WithHalo
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see [SizeReference][])",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
}
},
"required": [
"halo",
"size"
],
"title": "model.v0_5.WithHalo",
"type": "object"
}
Fields:
- halo (Annotated[int, Ge(1)])
- size (Annotated[SizeReference, Field(examples=[10, SizeReference(tensor_id=TensorId('t'), axis_id=AxisId('a'), offset=5).model_dump(mode='json')])])
halo
pydantic-field
¤
halo: Annotated[int, Ge(1)]
The halo should be cropped from the output tensor to avoid boundary effects.
It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model use size.offset instead.
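The cropping arithmetic is a one-liner; a sketch of the formula above (`size_after_crop` is a hypothetical helper named after it):

```python
def size_after_crop(size: int, halo: int) -> int:
    """Crop `halo` pixels from both sides of an axis: size - 2 * halo."""
    if halo < 1:
        raise ValueError("halo must be >= 1, per the schema's minimum")
    return size - 2 * halo
```

For example, an output axis of 128 pixels with halo=16 leaves 96 valid pixels.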
size
pydantic-field
¤
size: Annotated[
SizeReference,
Field(
examples=[
10,
SizeReference(
tensor_id=TensorId("t"),
axis_id=AxisId("a"),
offset=5,
).model_dump(mode="json"),
]
),
]
reference to another axis with an optional offset (see SizeReference)
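The SizeReference formula `axis.size = reference.size * reference.scale / axis.scale + offset` (fractions rounded down) can be checked with a few lines of plain Python. `referenced_size` is a hypothetical helper, and rounding down before adding the offset is an assumption consistent with the SizeReference docstring example:

```python
import math

def referenced_size(
    ref_size: int, ref_scale: float, axis_scale: float, offset: int = 0
) -> int:
    """axis.size = reference.size * reference.scale / axis.scale + offset,
    with the fraction rounded down before adding the offset."""
    return math.floor(ref_size * ref_scale / axis_scale) + offset
```

With a width axis of size 100 at scale 2 and a height axis at scale 4 with offset -1, this yields 49, matching the example in the SizeReference docstring.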
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ZeroMeanUnitVarianceDescr
pydantic-model
Bases: NodeWithExplicitlySetFields
Subtract mean and divide by variance.
Examples:
Subtract tensor mean and variance
- in YAML
  preprocessing:
  - id: zero_mean_unit_variance
- in Python
  >>> preprocessing = [ZeroMeanUnitVarianceDescr()]
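The kwargs can also be given inline in YAML; a sketch combining the schema defaults (field names taken from ZeroMeanUnitVarianceKwargs):

```yaml
preprocessing:
  - id: zero_mean_unit_variance
    kwargs:
      axes: [batch, x, y]  # normalize each channel independently
      eps: 1.0e-6          # schema default
```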
Show JSON schema:
{
"$defs": {
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
}
Fields:
- id (Literal['zero_mean_unit_variance'])
- kwargs (ZeroMeanUnitVarianceKwargs)
implemented_id
class-attribute
implemented_id: Literal["zero_mean_unit_variance"] = (
"zero_mean_unit_variance"
)
__pydantic_init_subclass__
classmethod
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ZeroMeanUnitVarianceKwargs
pydantic-model
Bases: KwargsNode
key word arguments for ZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for [ZeroMeanUnitVarianceDescr][]",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
Fields:
- axes (Annotated[Optional[Sequence[AxisId]], Field(examples=[('batch', 'x', 'y')])])
- eps (Annotated[float, Interval(gt=0, le=0.1)])
axes
pydantic-field
axes: Annotated[
    Optional[Sequence[AxisId]],
    Field(examples=[("batch", "x", "y")]),
] = None
The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize each sample independently leave out the 'batch' axis.
Default: Scale all axes jointly.
eps
pydantic-field
eps: Annotated[float, Interval(gt=0, le=0.1)] = 1e-06
epsilon for numeric stability: out = (tensor - mean) / (std + eps).
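The normalization these kwargs parameterize can be sketched with NumPy. This is an illustrative re-implementation of the documented semantics, not the bioimageio.spec code:

```python
import numpy as np

def zero_mean_unit_variance(tensor, axes=None, eps=1e-6):
    """out = (tensor - mean) / (std + eps), with mean/std reduced over `axes`."""
    mean = tensor.mean(axis=axes, keepdims=True)
    std = tensor.std(axis=axes, keepdims=True)
    return (tensor - mean) / (std + eps)

x = np.random.default_rng(0).normal(3.0, 2.0, size=(2, 4, 8, 8))  # (batch, channel, y, x)
# Normalize each channel independently: reduce over 'batch', 'y', 'x'.
y = zero_mean_unit_variance(x, axes=(0, 2, 3))
assert np.allclose(y.mean(axis=(0, 2, 3)), 0.0, atol=1e-6)
assert np.allclose(y.std(axis=(0, 2, 3)), 1.0, atol=1e-3)
```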
__contains__
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
extra: Optional[
Literal["allow", "ignore", "forbid"]
] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: Optional[bool] = None,
by_name: Optional[bool] = None,
) -> Self
Validate a pydantic model instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Union[Any, Mapping[str, Any]] | The object to validate. | required |
| strict | Optional[bool] | Whether to raise an exception on invalid fields. | None |
| from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
| context | Union[ValidationContext, Mapping[str, Any], None] | Additional context to pass to the validator. | None |
Raises:
| Type | Description |
|---|---|
| ValidationError | If the object failed validation. |
Returns:
| Type | Description |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
convert_axes
convert_axes(
axes: str,
*,
shape: Union[
Sequence[int],
_ParameterizedInputShape_v0_4,
_ImplicitOutputShape_v0_4,
],
tensor_type: Literal["input", "output"],
halo: Optional[Sequence[int]],
size_refs: Mapping[_TensorName_v0_4, Mapping[str, int]],
)
Source code in src/bioimageio/spec/model/v0_5.py
generate_covers
generate_covers(
inputs: Sequence[Tuple[InputTensorDescr, NDArray[Any]]],
outputs: Sequence[
Tuple[OutputTensorDescr, NDArray[Any]]
],
) -> List[Path]
Source code in src/bioimageio/spec/model/v0_5.py
validate_tensors
validate_tensors(
tensors: Mapping[
TensorId, Tuple[TensorDescr, Optional[NDArray[Any]]]
],
tensor_origin: Literal["test_tensor"],
)
Source code in src/bioimageio/spec/model/v0_5.py