v0_5
¤
| CLASS | DESCRIPTION |
|---|---|
| ArchitectureFromFileDescr | |
| ArchitectureFromLibraryDescr | |
| Author | |
| AxisBase | |
| AxisId | |
| BadgeDescr | A custom badge |
| BatchAxis | |
| BinarizeAlongAxisKwargs | key word arguments for `BinarizeDescr` |
| BinarizeDescr | Binarize the tensor with a fixed threshold. |
| BinarizeKwargs | key word arguments for `BinarizeDescr` |
| BioimageioConfig | |
| CallableFromDepencency | |
| ChannelAxis | |
| CiteEntry | A citation that should be referenced in work using this resource. |
| ClipDescr | Set tensor values below min to min and above max to max. |
| ClipKwargs | key word arguments for `ClipDescr` |
| Config | |
| Converter | |
| DataDependentSize | |
| DatasetDescr | A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing. |
| DatasetDescr02 | A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing. |
| DatasetId | |
| Datetime | Timestamp in ISO 8601 format |
| DeprecatedLicenseId | |
| Doi | A digital object identifier, see https://www.doi.org/ |
| EnsureDtypeDescr | Cast the tensor data type to `EnsureDtypeKwargs.dtype` |
| EnsureDtypeKwargs | key word arguments for `EnsureDtypeDescr` |
| FileDescr | A file description |
| FixedZeroMeanUnitVarianceAlongAxisKwargs | key word arguments for `FixedZeroMeanUnitVarianceDescr` |
| FixedZeroMeanUnitVarianceDescr | Subtract a given mean and divide by the standard deviation. |
| FixedZeroMeanUnitVarianceKwargs | key word arguments for `FixedZeroMeanUnitVarianceDescr` |
| GenericModelDescrBase | Base for all resource descriptions, including model descriptions |
| HttpUrl | A URL with the HTTP or HTTPS scheme. |
| Identifier | |
| IndexAxisBase | |
| IndexInputAxis | |
| IndexOutputAxis | |
| InputTensorDescr | |
| IntervalOrRatioDataDescr | |
| InvalidDescr | A representation of an invalid resource description |
| KerasHdf5WeightsDescr | |
| LicenseId | |
| LinkedDataset | Reference to a bioimage.io dataset. |
| LinkedDataset02 | Reference to a bioimage.io dataset. |
| LinkedModel | Reference to a bioimage.io model. |
| LinkedResource | Reference to a bioimage.io resource |
| LinkedResourceBase | |
| LowerCaseIdentifier | |
| Maintainer | |
| ModelDescr | Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. |
| ModelId | |
| Node | |
| NodeWithExplicitlySetFields | |
| NominalOrOrdinalDataDescr | |
| OnnxWeightsDescr | |
| OrcidId | An ORCID identifier, see https://orcid.org/ |
| OutputTensorDescr | |
| ParameterizedSize | Describes a range of valid tensor axis sizes as `size = min + n*step` |
| ProcessingDescrBase | processing base class |
| ProcessingKwargs | base class for pre-/postprocessing key word arguments |
| PytorchStateDictWeightsDescr | |
| RelativeFilePath | A path relative to the `rdf.yaml` file (also if the RDF source is a URL). |
| ReproducibilityTolerance | Describes what small numerical differences -- if any -- may be tolerated in the generated output when executing in different environments. |
| ResourceId | |
| RestrictCharacters | |
| RunMode | |
| ScaleLinearAlongAxisKwargs | Key word arguments for `ScaleLinearDescr` |
| ScaleLinearDescr | Fixed linear scaling. |
| ScaleLinearKwargs | Key word arguments for `ScaleLinearDescr` |
| ScaleMeanVarianceDescr | Scale a tensor's data distribution to match another tensor's mean/std. |
| ScaleMeanVarianceKwargs | key word arguments for `ScaleMeanVarianceDescr` |
| ScaleRangeDescr | Scale with percentiles. |
| ScaleRangeKwargs | key word arguments for `ScaleRangeDescr` |
| Sha256 | A SHA-256 hash value |
| SiUnit | An SI unit |
| SigmoidDescr | The logistic sigmoid function, a.k.a. expit function. |
| SizeReference | A tensor axis size (extent in pixels/frames) defined in relation to a reference axis. |
| SoftmaxDescr | The softmax function. |
| SoftmaxKwargs | key word arguments for `SoftmaxDescr` |
| SpaceAxisBase | |
| SpaceInputAxis | |
| SpaceOutputAxis | |
| SpaceOutputAxisWithHalo | |
| TensorDescrBase | |
| TensorId | |
| TensorflowJsWeightsDescr | |
| TensorflowSavedModelBundleWeightsDescr | |
| TimeAxisBase | |
| TimeInputAxis | |
| TimeOutputAxis | |
| TimeOutputAxisWithHalo | |
| TorchscriptWeightsDescr | |
| Uploader | |
| Version | wraps a packaging.version.Version instance for validation in pydantic models |
| WeightsDescr | |
| WeightsEntryDescrBase | |
| WithHalo | |
| WithSuffix | |
| ZeroMeanUnitVarianceDescr | Subtract mean and divide by variance. |
| ZeroMeanUnitVarianceKwargs | key word arguments for `ZeroMeanUnitVarianceDescr` |
| _ArchFileConv | |
| _ArchLibConv | |
| _ArchitectureCallableDescr | |
| _Author_v0_4 | |
| _AxisSizes | the lengths of all axes of model inputs and outputs |
| _BinarizeDescr_v0_4 | Binarize the tensor with a fixed threshold |
| _CallableFromDepencency_v0_4 | |
| _CallableFromFile_v0_4 | |
| _ClipDescr_v0_4 | Clip tensor values to a range. |
| _DataDepSize | |
| _ImplicitOutputShape_v0_4 | Output tensor shape depending on an input tensor shape. |
| _InputTensorConv | |
| _InputTensorDescr_v0_4 | |
| _ModelConv | |
| _ModelDescr_v0_4 | Specification of the fields used in a bioimage.io-compliant RDF that describes AI models with pretrained weights. |
| _OutputTensorConv | |
| _OutputTensorDescr_v0_4 | |
| _ParameterizedInputShape_v0_4 | A sequence of valid shapes given by `shape = min + k * step` |
| _ScaleLinearDescr_v0_4 | Fixed linear scaling. |
| _ScaleMeanVarianceDescr_v0_4 | Scale the tensor s.t. its mean and variance match a reference tensor. |
| _ScaleRangeDescr_v0_4 | Scale with percentiles. |
| _SigmoidDescr_v0_4 | The logistic sigmoid function, a.k.a. expit function. |
| _TensorName_v0_4 | |
| _TensorSizes | `_AxisSizes` as nested dicts |
| _WithInputAxisSize | |
| _WithOutputAxisSize | |
| _ZeroMeanUnitVarianceDescr_v0_4 | Subtract mean and divide by variance. |
| FUNCTION | DESCRIPTION |
|---|---|
| _axes_letters_to_ids | |
| _convert_proc | |
| _get_complement_v04_axis | |
| _get_halo_axis_discriminator_value | |
| _is_batch | |
| _is_not_batch | |
| _normalize_axis_id | |
| convert_axes | |
| extract_file_name | |
| generate_covers | |
| get_reader | Open a file |
| get_validation_context | Get the currently active validation context (or a default) |
| is_dict | to avoid Dict[Unknown, Unknown] |
| is_sequence | to avoid Sequence[Unknown] |
| issue_warning | |
| load_array | |
| package_file_descr_serializer | |
| package_weights | |
| validate_tensors | |
| warn | treat a type or its annotation metadata as a warning condition |
| wo_special_file_name | |
ANY_AXIS_TYPES
module-attribute
¤
ANY_AXIS_TYPES = INPUT_AXIS_TYPES + OUTPUT_AXIS_TYPES
intended for isinstance comparisons in py<3.10
AxisType
module-attribute
¤
AxisType = Literal[
"batch", "channel", "index", "time", "space"
]
BioimageioYamlContent
module-attribute
¤
BioimageioYamlContent = Dict[str, YamlValue]
DTYPE_LIMITS
module-attribute
¤
DTYPE_LIMITS = MappingProxyType(
{
"float32": _DtypeLimit(-3.4028235e38, 3.4028235e38),
"float64": _DtypeLimit(
-1.7976931348623157e308, 1.7976931348623157e308
),
"uint8": _DtypeLimit(0, 255),
"int8": _DtypeLimit(-128, 127),
"uint16": _DtypeLimit(0, 65535),
"int16": _DtypeLimit(-32768, 32767),
"uint32": _DtypeLimit(0, 4294967295),
"int32": _DtypeLimit(-2147483648, 2147483647),
"uint64": _DtypeLimit(0, 18446744073709551615),
"int64": _DtypeLimit(
-9223372036854775808, 9223372036854775807
),
}
)
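For illustration, a minimal sketch of using these limits to sanity-check a value range before declaring a tensor dtype; it assumes `DTYPE_LIMITS` is importable from `bioimageio.spec.model.v0_5` and that each `_DtypeLimit` entry exposes `min` and `max` attributes:

```python
from bioimageio.spec.model.v0_5 import DTYPE_LIMITS


def fits_dtype(vmin: float, vmax: float, dtype: str) -> bool:
    """Return True if the value range [vmin, vmax] is representable in `dtype`."""
    limit = DTYPE_LIMITS[dtype]  # assumed to expose .min and .max
    return limit.min <= vmin and vmax <= limit.max


assert fits_dtype(0, 255, "uint8")
assert not fits_dtype(-1.0, 255, "uint8")  # negative values do not fit an unsigned dtype
```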
FileDescr_
module-attribute
¤
FileDescr_ = FileDescr
A FileDescr whose source is included when packaging the resource.
FileSource
module-attribute
¤
FileSource = Union[HttpUrl, RelativeFilePath, FilePath]
FileSource_
module-attribute
¤
FileSource_ = FileSource
A file source that is included when packaging the resource.
INPUT_AXIS_TYPES
module-attribute
¤
INPUT_AXIS_TYPES = (
BatchAxis,
ChannelAxis,
IndexInputAxis,
TimeInputAxis,
SpaceInputAxis,
)
intended for isinstance comparisons in py<3.10
InputAxis
module-attribute
¤
InputAxis = _InputAxisUnion
IntervalOrRatioDType
module-attribute
¤
IntervalOrRatioDType = Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
]
NominalOrOrdinalDType
module-attribute
¤
NominalOrOrdinalDType = Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
NotEmpty
module-attribute
¤
NotEmpty = S
OUTPUT_AXIS_TYPES
module-attribute
¤
OUTPUT_AXIS_TYPES = (
BatchAxis,
ChannelAxis,
IndexOutputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
)
intended for isinstance comparisons in py<3.10
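As a sketch of the intended runtime check (the axis instance here is a minimal hypothetical example built from the `BatchAxis` schema shown further below):

```python
from bioimageio.spec.model.v0_5 import INPUT_AXIS_TYPES, OUTPUT_AXIS_TYPES, BatchAxis

axis = BatchAxis.model_validate({"type": "batch"})

# The InputAxis/OutputAxis union aliases cannot be used with isinstance on py<3.10,
# so these tuples are provided for runtime type checks instead.
assert isinstance(axis, INPUT_AXIS_TYPES)
assert isinstance(axis, OUTPUT_AXIS_TYPES)  # BatchAxis is part of both tuples
```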
OutputAxis
module-attribute
¤
OutputAxis = _OutputAxisUnion
ParameterizedSize_N
module-attribute
¤
ParameterizedSize_N = int
Annotates an integer to calculate a concrete axis size from a ParameterizedSize.
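As a small illustration of what this annotation means, here is a hypothetical helper, assuming the `size = min + n*step` parameterization that `ParameterizedSize` describes:

```python
def concrete_size(min_size: int, step: int, n: int) -> int:
    """Map a ParameterizedSize_N value `n` to a concrete axis size."""
    return min_size + n * step


assert concrete_size(min_size=64, step=16, n=3) == 112  # 64 + 3 * 16
```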
PostprocessingDescr
module-attribute
¤
PostprocessingDescr = Union[
BinarizeDescr,
ClipDescr,
EnsureDtypeDescr,
FixedZeroMeanUnitVarianceDescr,
ScaleLinearDescr,
ScaleMeanVarianceDescr,
ScaleRangeDescr,
SigmoidDescr,
SoftmaxDescr,
ZeroMeanUnitVarianceDescr,
]
PostprocessingId
module-attribute
¤
PostprocessingId = Literal[
"binarize",
"clip",
"ensure_dtype",
"fixed_zero_mean_unit_variance",
"scale_linear",
"scale_mean_variance",
"scale_range",
"sigmoid",
"softmax",
"zero_mean_unit_variance",
]
PreprocessingDescr
module-attribute
¤
PreprocessingDescr = Union[
BinarizeDescr,
ClipDescr,
EnsureDtypeDescr,
FixedZeroMeanUnitVarianceDescr,
ScaleLinearDescr,
ScaleRangeDescr,
SigmoidDescr,
SoftmaxDescr,
ZeroMeanUnitVarianceDescr,
]
PreprocessingId
module-attribute
¤
PreprocessingId = Literal[
"binarize",
"clip",
"ensure_dtype",
"fixed_zero_mean_unit_variance",
"scale_linear",
"scale_range",
"sigmoid",
"softmax",
]
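For illustration, a minimal sketch of a preprocessing chain built from a member of this union; it mirrors the `BinarizeDescr` example given further down on this page:

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    BinarizeAlongAxisKwargs,
    BinarizeDescr,
)

# any member of the PreprocessingDescr union may appear in such a list
preprocessing = [
    BinarizeDescr(
        kwargs=BinarizeAlongAxisKwargs(
            axis=AxisId("channel"),
            threshold=[0.25, 0.5, 0.75],
        )
    )
]
```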
SpaceUnit
module-attribute
¤
SpaceUnit = Literal[
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter",
]
Space unit compatible with the OME-Zarr axes specification 0.5
SpecificWeightsDescr
module-attribute
¤
SpecificWeightsDescr = Union[
KerasHdf5WeightsDescr,
OnnxWeightsDescr,
PytorchStateDictWeightsDescr,
TensorflowJsWeightsDescr,
TensorflowSavedModelBundleWeightsDescr,
TorchscriptWeightsDescr,
]
TVs
module-attribute
¤
TVs = Union[
NotEmpty[List[int]],
NotEmpty[List[float]],
NotEmpty[List[bool]],
NotEmpty[List[str]],
]
TensorDataDescr
module-attribute
¤
TensorDataDescr = Union[
NominalOrOrdinalDataDescr, IntervalOrRatioDataDescr
]
TensorDescr
module-attribute
¤
TensorDescr = Union[InputTensorDescr, OutputTensorDescr]
TimeUnit
module-attribute
¤
TimeUnit = Literal[
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond",
]
Time unit compatible with the OME-Zarr axes specification 0.5
VALID_COVER_IMAGE_EXTENSIONS
module-attribute
¤
VALID_COVER_IMAGE_EXTENSIONS = (
".gif",
".jpeg",
".jpg",
".png",
".svg",
)
WeightsFormat
module-attribute
¤
WeightsFormat = Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
]
YamlValue
module-attribute
¤
YamlValue = Union[
YamlLeafValue,
List["YamlValue"],
Dict[YamlKey, "YamlValue"],
]
_AXIS_ID_MAP
module-attribute
¤
_AXIS_ID_MAP = {
"b": "batch",
"t": "time",
"i": "index",
"c": "channel",
}
_AXIS_TYPE_MAP
module-attribute
¤
_AXIS_TYPE_MAP: Mapping[str, AxisType] = {
"b": "batch",
"t": "time",
"i": "index",
"c": "channel",
"x": "space",
"y": "space",
"z": "space",
}
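A short sketch of how such a letter map can be applied to a legacy (model spec 0.4) axes string; the map is re-declared locally here because `_AXIS_TYPE_MAP` is a private module attribute:

```python
AXIS_TYPE_MAP = {
    "b": "batch",
    "t": "time",
    "i": "index",
    "c": "channel",
    "x": "space",
    "y": "space",
    "z": "space",
}

axis_types = [AXIS_TYPE_MAP[letter] for letter in "bczyx"]
assert axis_types == ["batch", "channel", "space", "space", "space"]
```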
_InputAxisUnion
module-attribute
¤
_InputAxisUnion = Union[
BatchAxis,
ChannelAxis,
IndexInputAxis,
TimeInputAxis,
SpaceInputAxis,
]
_OutputAxisUnion
module-attribute
¤
_OutputAxisUnion = Union[
BatchAxis,
ChannelAxis,
IndexOutputAxis,
_TimeOutputAxisUnion,
_SpaceOutputAxisUnion,
]
_PostprocessingDescr_v0_4
module-attribute
¤
_PostprocessingDescr_v0_4 = Union[
BinarizeDescr,
ClipDescr,
ScaleLinearDescr,
SigmoidDescr,
ZeroMeanUnitVarianceDescr,
ScaleRangeDescr,
ScaleMeanVarianceDescr,
]
_PreprocessingDescr_v0_4
module-attribute
¤
_PreprocessingDescr_v0_4 = Union[
BinarizeDescr,
ClipDescr,
ScaleLinearDescr,
SigmoidDescr,
ZeroMeanUnitVarianceDescr,
ScaleRangeDescr,
]
_SpaceOutputAxisUnion
module-attribute
¤
_SpaceOutputAxisUnion = Union[
SpaceOutputAxis, SpaceOutputAxisWithHalo
]
_TimeOutputAxisUnion
module-attribute
¤
_TimeOutputAxisUnion = Union[
TimeOutputAxis, TimeOutputAxisWithHalo
]
_arch_file_conv
module-attribute
¤
_arch_file_conv = _ArchFileConv(
_CallableFromFile_v0_4, ArchitectureFromFileDescr
)
_arch_lib_conv
module-attribute
¤
_arch_lib_conv = _ArchLibConv(
_CallableFromDepencency_v0_4,
ArchitectureFromLibraryDescr,
)
_input_tensor_conv
module-attribute
¤
_input_tensor_conv = _InputTensorConv(
_InputTensorDescr_v0_4, InputTensorDescr
)
_maintainer_conv
module-attribute
¤
_maintainer_conv = _MaintainerConv(
_Maintainer_v0_2, Maintainer
)
_output_tensor_conv
module-attribute
¤
_output_tensor_conv = _OutputTensorConv(
_OutputTensorDescr_v0_4, OutputTensorDescr
)
ArchitectureFromFileDescr
pydantic-model
¤
Bases: _ArchitectureCallableDescr, FileDescr
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
}
Fields:
- sha256(Optional[Sha256])
- callable(Identifier)
- kwargs(Dict[str, YamlValue])
- source(FileSource)
callable
pydantic-field
¤
callable: Identifier
Identifier of the callable that returns a torch.nn.Module instance.
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
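A hypothetical usage sketch: `model_validate` accepts raw RDF data as a mapping and returns a validated instance. The URL and callable name are placeholders, and disabling IO checks via the context is an assumption about the available validation context options (with default settings the source may be checked for availability):

```python
from bioimageio.spec.model.v0_5 import ArchitectureFromFileDescr

arch = ArchitectureFromFileDescr.model_validate(
    {
        "source": "https://example.com/my_architecture.py",  # placeholder URL
        "callable": "MyNetworkClass",
    },
    context={"perform_io_checks": False},  # assumed context option
)
```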
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
ArchitectureFromLibraryDescr
pydantic-model
¤
Bases: _ArchitectureCallableDescr
Show JSON schema:
{
"$defs": {
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
}
Fields:
- callable(Identifier)
- kwargs(Dict[str, YamlValue])
- import_from(str)
callable
pydantic-field
¤
callable: Identifier
Identifier of the callable that returns a torch.nn.Module instance.
import_from
pydantic-field
¤
import_from: str
Where to import the callable from, i.e. from <import_from> import <callable>
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
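A hypothetical construction sketch; the module and callable names are placeholders:

```python
from bioimageio.spec.model.v0_5 import ArchitectureFromLibraryDescr, Identifier

arch = ArchitectureFromLibraryDescr(
    import_from="my_package.models",  # i.e. `from my_package.models import UNet`
    callable=Identifier("UNet"),
    kwargs={"in_channels": 1, "out_channels": 3},
)
```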
Author
pydantic-model
¤
Bases: _Author_v0_2
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
}
Fields:
- affiliation(Optional[str])
- email(Optional[EmailStr])
- orcid(Optional[OrcidId])
- name(str)
- github_user(Optional[str])

Validators:
- _validate_github_user → github_user
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
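A brief construction sketch with made-up author data; the ORCID value is the example from the field description above, and only `name` is required:

```python
from bioimageio.spec.model.v0_5 import Author

author = Author(
    name="Jane Doe",
    affiliation="Example University",
    github_user="janedoe",
    orcid="0000-0001-2345-6789",
)
```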
AxisBase
pydantic-model
¤
Bases: NodeWithExplicitlySetFields
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"description": "An axis id unique across all axes of one tensor.",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.AxisBase",
"type": "object"
}
Fields:
- id(AxisId)
- description(str)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
AxisId
¤
Bases: LowerCaseIdentifier
flowchart TD
bioimageio.spec.model.v0_5.AxisId[AxisId]
bioimageio.spec._internal.types.LowerCaseIdentifier[LowerCaseIdentifier]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.types.LowerCaseIdentifier --> bioimageio.spec.model.v0_5.AxisId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.types.LowerCaseIdentifier
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `__get_pydantic_json_schema__` | |
| `__new__` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `root_model` | TYPE: `Type[RootModel[Any]]` |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
LowerCaseIdentifierAnno
]
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
BadgeDescr
pydantic-model
¤
Bases: Node
A custom badge
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
}
Fields:
- label(str)
- icon(Optional[Union[Union[FilePath, RelativeFilePath], Union[HttpUrl, pydantic.HttpUrl]]])
- url(HttpUrl)
icon
pydantic-field
¤
icon: Optional[
Union[
Union[FilePath, RelativeFilePath],
Union[HttpUrl, pydantic.HttpUrl],
]
] = None
badge icon (included in bioimage.io package if not a URL)
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
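A construction sketch reusing the example values from the schema above:

```python
from bioimageio.spec.model.v0_5 import BadgeDescr

badge = BadgeDescr.model_validate(
    {
        "label": "Open in Colab",
        "icon": "https://colab.research.google.com/assets/colab-badge.svg",
        "url": (
            "https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/"
            "blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
        ),
    }
)
```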
BatchAxis
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
}
Fields:
- description(str)
- type(Literal['batch'])
- id(AxisId)
- size(Optional[Literal[1]])
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Optional[Literal[1]] = None
The batch size may be fixed to 1, otherwise (the default) it may be chosen arbitrarily depending on available memory
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
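A small sketch of declaring batch axes via `model_validate`, following the JSON schema shown above: one axis with an unrestricted batch size and one fixed to 1:

```python
from bioimageio.spec.model.v0_5 import BatchAxis

flexible_batch = BatchAxis.model_validate({"type": "batch"})           # size chosen at runtime
fixed_batch = BatchAxis.model_validate({"type": "batch", "size": 1})   # batch size fixed to 1
```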
BinarizeAlongAxisKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for BinarizeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
}
Fields:
- threshold(NotEmpty[List[float]])
- axis(NonBatchAxisId)
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
BinarizeDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Binarize the tensor with a fixed threshold.
Values above BinarizeKwargs.threshold/BinarizeAlongAxisKwargs.threshold
will be set to one, values below the threshold to zero.
Examples:
- in YAML:

      postprocessing:
        - id: binarize
          kwargs:
            axis: 'channel'
            threshold: [0.25, 0.5, 0.75]

- in Python:

      >>> postprocessing = [BinarizeDescr(
      ...     kwargs=BinarizeAlongAxisKwargs(
      ...         axis=AxisId('channel'),
      ...         threshold=[0.25, 0.5, 0.75],
      ...     )
      ... )]
Show JSON schema:
{
"$defs": {
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above `BinarizeKwargs.threshold`/`BinarizeAlongAxisKwargs.threshold`\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
}
Fields:
- id(Literal['binarize'])
- kwargs(Union[BinarizeKwargs, BinarizeAlongAxisKwargs])
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
BinarizeKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for BinarizeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
}
Fields:
- threshold(float)
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
BioimageioConfig
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.0001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
}
},
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
}
Fields:
- reproducibility_tolerance(Sequence[ReproducibilityTolerance])
reproducibility_tolerance
pydantic-field
¤
reproducibility_tolerance: Sequence[
ReproducibilityTolerance
] = ()
Tolerances to allow when reproducing the model's test outputs from the model's test inputs. Only the first entry matching tensor id and weights format is considered.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
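A sketch of a config that loosens the reproducibility check for ONNX weights; the field names and bounds follow the `ReproducibilityTolerance` schema above:

```python
from bioimageio.spec.model.v0_5 import BioimageioConfig, ReproducibilityTolerance

config = BioimageioConfig(
    reproducibility_tolerance=[
        ReproducibilityTolerance(
            relative_tolerance=0.01,             # maximum allowed by the schema
            absolute_tolerance=0.001,
            mismatched_elements_per_million=200,
            weights_formats=["onnx"],            # only applies to ONNX weights
        )
    ]
)
```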
CallableFromDepencency
¤
Bases: ValidatedStringWithInnerNode[CallableFromDepencencyNode]
flowchart TD
bioimageio.spec.model.v0_5.CallableFromDepencency[CallableFromDepencency]
bioimageio.spec._internal.validated_string_with_inner_node.ValidatedStringWithInnerNode[ValidatedStringWithInnerNode]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string_with_inner_node.ValidatedStringWithInnerNode --> bioimageio.spec.model.v0_5.CallableFromDepencency
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.validated_string_with_inner_node.ValidatedStringWithInnerNode
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `__get_pydantic_json_schema__` | |
| `__new__` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `callable_name` | The callable Python identifier implemented in module `module_name`. |
| `module_name` | The Python module that implements `callable_name`. |
| `root_model` | |
callable_name
property
¤
callable_name
The callable Python identifier implemented in module module_name.
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
ChannelAxis
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
}
Fields:
- description(str)
- type(Literal['channel'])
- id(NonBatchAxisId)
- channel_names(NotEmpty[List[Identifier]])
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
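A construction sketch for an RGB channel axis, matching the schema above:

```python
from bioimageio.spec.model.v0_5 import ChannelAxis

rgb = ChannelAxis.model_validate(
    {"type": "channel", "channel_names": ["red", "green", "blue"]}
)
```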
CiteEntry
pydantic-model
¤
Bases: Node
A citation that should be referenced in work using this resource.
Show JSON schema:
{
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
}
Fields:
Validators:
- _check_doi_or_url
doi
pydantic-field
¤
doi: Optional[Doi] = None
A digital object identifier (DOI) is the preferred citation reference. See https://www.doi.org/ for details. Note: Either doi or url have to be specified.
url
pydantic-field
¤
url: Optional[HttpUrl] = None
URL to cite (preferably specify a doi instead/also). Note: Either doi or url have to be specified.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
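A citation sketch; the DOI is a placeholder in the documented `10.xxxx` pattern, not a real reference, and either `doi` or `url` must be given:

```python
from bioimageio.spec.model.v0_5 import CiteEntry

cite = CiteEntry(
    text="Doe et al. (2024): an example method",  # free text description
    doi="10.1234/placeholder",                    # placeholder DOI
)
```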
ClipDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Set tensor values below min to min and above max to max.
See ScaleRangeDescr for examples.
Show JSON schema:
{
"$defs": {
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
}
Fields:
- id(Literal['clip'])
- kwargs(ClipKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
ClipKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for ClipDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
}
Fields:
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
Config
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"BioimageioConfig": {
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
},
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.0001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
}
},
"additionalProperties": true,
"properties": {
"bioimageio": {
"$ref": "#/$defs/BioimageioConfig"
}
},
"title": "model.v0_5.Config",
"type": "object"
}
Fields:
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
Converter
¤
Converter(src: Type[SRC], tgt: Type[TGT])
Bases: Generic[SRC, TGT, Unpack[CArgs]], ABC
flowchart TD
bioimageio.spec.model.v0_5.Converter[Converter]
| METHOD | DESCRIPTION |
|---|---|
| `convert` | convert source node |
| `convert_as_dict` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `src` | TYPE: `Type[SRC]` |
| `tgt` | TYPE: `Type[TGT]` |

Source code in src/bioimageio/spec/_internal/node_converter.py
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| `source` | A bioimageio description node. TYPE: `SRC` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | conversion failed |

Source code in src/bioimageio/spec/_internal/node_converter.py
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py
DataDependentSize
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
}
Fields:
Validators:
- _validate_max_gt_min
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
validate_size
¤
validate_size(size: int) -> int
Source code in src/bioimageio/spec/model/v0_5.py, lines 344-351
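A short sketch constructing a DataDependentSize node; the bounds are illustrative assumptions, and validate_size is assumed (from its name) to check a concrete size against them:

```python
# Sketch: an axis whose size is only known at runtime, bounded by min/max.
from bioimageio.spec.model.v0_5 import DataDependentSize

size_descr = DataDependentSize(min=16, max=1024)  # max must exceed min (see _validate_max_gt_min)
print(size_descr.validate_size(256))              # presumably returns 256, which lies within the bounds
```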
DatasetDescr
pydantic-model
¤
Bases: GenericDescrBase
A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"BioimageioConfig": {
"additionalProperties": true,
"description": "bioimage.io internal metadata.",
"properties": {},
"title": "generic.v0_3.BioimageioConfig",
"type": "object"
},
"CiteEntry": {
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
},
"Config": {
"additionalProperties": true,
"description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
"properties": {
"bioimageio": {
"$ref": "#/$defs/BioimageioConfig"
}
},
"title": "generic.v0_3.Config",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of this resource description and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.3.0",
"description": "The **format** version of this resource specification",
"title": "Format Version",
"type": "string"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"title": "Documentation"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"config": {
"$ref": "#/$defs/Config",
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n giraffe_neckometer: # here is the domain name\n length: 3837283\n address:\n home: zoo\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"parent": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The description from which this one is derived",
"title": "Parent"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"format_version",
"type"
],
"title": "dataset 0.3.0",
"type": "object"
}
Fields:
- _validation_summary (Optional[ValidationSummary])
- _root (Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name (Optional[FileName])
- name (str)
- description (FAIR[str])
- covers (List[FileSource_cover])
- id_emoji (Optional[str])
- authors (FAIR[List[Author]])
- attachments (List[FileDescr_])
- cite (FAIR[List[CiteEntry]])
- license (FAIR[Union[LicenseId, DeprecatedLicenseId, None]])
- git_repo (Optional[HttpUrl])
- icon (Union[str, FileSource_, None])
- links (List[str])
- uploader (Optional[Uploader])
- maintainers (List[Maintainer])
- tags (FAIR[List[str]])
- version (Optional[Version])
- version_comment (Optional[str])
- format_version (Literal['0.3.0'])
- documentation (FAIR[Optional[FileSource_documentation]])
- badges (List[BadgeDescr])
- config (Config)
- type (Literal['dataset'])
- id (Optional[DatasetId])
- parent (Optional[DatasetId])
- source (FAIR[Optional[HttpUrl]])
Validators:
- _convert
authors
pydantic-field
¤
The authors are the creators of this resource description and the primary points of contact.
config
pydantic-field
¤
config: Config
A field for custom configuration that can contain any keys not present in the RDF spec.
This means you should not store, for example, a GitHub repo URL in config since there is a git_repo field.
Keys in config may be very specific to a tool or consumer software. To avoid conflicting definitions,
it is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,
for example:
config:
  giraffe_neckometer: # here is the domain name
    length: 3837283
    address:
      home: zoo
  imagej: # config specific to ImageJ
    macro_dir: path/to/macro/file
If possible, please use snake_case for keys in config.
You may want to list linked files additionally under attachments to include them when packaging a resource.
(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains
an altered rdf.yaml file with local references to the downloaded files.)
documentation
pydantic-field
¤
documentation: FAIR[Optional[FileSource_documentation]] = (
None
)
URL or relative path to a markdown file encoded in UTF-8 with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Optional[HttpUrl] = None
A URL to the Git repository where the resource is being developed.
icon
pydantic-field
¤
icon: Union[str, FileSource_, None] = None
An icon for illustration, e.g. on bioimage.io
id
pydantic-field
¤
id: Optional[DatasetId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
license
pydantic-field
¤
license: FAIR[
Union[LicenseId, DeprecatedLicenseId, None]
] = None
A SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need that, please open a GitHub issue to discuss your intentions with the community.
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers and at least some of them have to specify their github_user name.
name
pydantic-field
¤
name: str
A human-friendly name of the resource description. May only contain letters, digits, underscore, minus, parentheses and spaces.
parent
pydantic-field
¤
parent: Optional[DatasetId] = None
The description from which this one is derived
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
version_comment
pydantic-field
¤
version_comment: Optional[str] = None
A comment on the version of the resource.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 199-211
convert_from_old_format_wo_validation
classmethod
¤
convert_from_old_format_wo_validation(
data: BioimageioYamlContent,
) -> None
Convert metadata following an older format version to this class's format without validating the result.
Source code in src/bioimageio/spec/generic/v0_3.py, lines 449-454
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 377-392
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 213-253
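A sketch of loading a dataset RDF dict with the documented load factory method; all metadata values below are illustrative assumptions:

```python
from bioimageio.spec.model.v0_5 import DatasetDescr, InvalidDescr

# an illustrative dataset RDF as a plain dict (values are assumptions)
data = {
    "type": "dataset",
    "format_version": "0.3.0",
    "name": "example nuclei dataset",
    "description": "an illustrative dataset description",
    "authors": [{"name": "Jane Doe"}],
    "cite": [{"text": "Doe et al. 2024", "doi": "10.1234/example"}],
    "license": "CC-BY-4.0",
}

descr = DatasetDescr.load(data)  # returns a DatasetDescr or an InvalidDescr
if isinstance(descr, InvalidDescr):
    print("validation failed")
else:
    print(descr.name)
```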
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| dest | (path/bytes stream of) destination zipfile. TYPE: Optional[Union[ZipFile, IO[bytes], Path, str]] |
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 347-375
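A small sketch wrapping the documented package method; the default destination path is an illustrative assumption:

```python
from pathlib import Path
from zipfile import ZipFile


def package_description(descr, dest: Path = Path("packaged_resource.zip")) -> ZipFile:
    """Package a loaded bioimage.io description into a ZIP archive at `dest`.

    `descr` is any loaded resource description exposing the documented `package` method.
    """
    return descr.package(dest)
```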
warn_about_tag_categories
classmethod
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py, lines 384-401
DatasetDescr02
pydantic-model
¤
Bases: GenericDescrBase
A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.
Show JSON schema:
{
"$defs": {
"AttachmentsDescr": {
"additionalProperties": true,
"properties": {
"files": {
"description": "File attachments",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Files",
"type": "array"
}
},
"title": "generic.v0_2.AttachmentsDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_2.Author",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"CiteEntry": {
"additionalProperties": false,
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details. (alternatively specify `url`)",
"title": "Doi"
},
"url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a `doi` instead)",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_2.CiteEntry",
"type": "object"
},
"Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_2.Maintainer",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description",
"minLength": 1,
"title": "Name",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 1,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "file and other attachments"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"config": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n bioimageio: # here is the domain name\n my_custom_key: 3837283\n another_key:\n nested: value\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
"examples": [
{
"bioimageio": {
"another_key": {
"nested": "value"
},
"my_custom_key": 3837283
},
"imagej": {
"macro_dir": "path/to/macro/file"
}
}
],
"title": "Config",
"type": "object"
},
"download_url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to download the resource from (deprecated)",
"title": "Download Url"
},
"git_repo": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"rdf_source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
"title": "Rdf Source"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version)",
"title": "Version Number"
},
"format_version": {
"const": "0.2.4",
"description": "The format version of this resource specification\n(not the `version` of the resource description)\nWhen creating a new resource always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"documentation": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
],
"title": "Documentation"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"description",
"format_version",
"type"
],
"title": "dataset 0.2.4",
"type": "object"
}
Fields:
- _validation_summary (Optional[ValidationSummary])
- _root (Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name (Optional[FileName])
- name (NotEmpty[str])
- description (str)
- covers (List[FileSource_cover])
- id_emoji (Optional[str])
- authors (List[Author])
- attachments (Optional[AttachmentsDescr])
- cite (List[CiteEntry])
- config (Dict[str, YamlValue])
- download_url (Optional[HttpUrl])
- git_repo (Optional[str])
- icon (Union[str, FileSource, None])
- links (List[str])
- uploader (Optional[Uploader])
- maintainers (List[Maintainer])
- rdf_source (Optional[FileSource])
- tags (List[str])
- version (Optional[Version])
- version_number (Optional[int])
- format_version (Literal['0.2.4'])
- badges (List[BadgeDescr])
- documentation (Optional[FileSource])
- license (Union[LicenseId, DeprecatedLicenseId, str, None])
- type (Literal['dataset'])
- id (Optional[DatasetId])
- source (Optional[HttpUrl])
attachments
pydantic-field
¤
attachments: Optional[AttachmentsDescr] = None
file and other attachments
authors
pydantic-field
¤
authors: List[Author]
The authors are the creators of the RDF and the primary points of contact.
config
pydantic-field
¤
config: Dict[str, YamlValue]
A field for custom configuration that can contain any keys not present in the RDF spec.
This means you should not store, for example, a GitHub repo URL in config since we already have the
git_repo field defined in the spec.
Keys in config may be very specific to a tool or consumer software. To avoid conflicting definitions,
it is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,
for example:

    config:
      bioimageio:  # here is the domain name
        my_custom_key: 3837283
        another_key:
          nested: value
      imagej:  # config specific to ImageJ
        macro_dir: path/to/macro/file

Please use snake_case for keys in config.
You may want to list linked files additionally under attachments to include them when packaging a resource
(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains
an altered rdf.yaml file with local references to the downloaded files).
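For orientation, the same example expressed as a plain Python dict that could populate the config field; all keys and values are the illustrative placeholders from the YAML above.

```python
# The YAML example above as a Python dict suitable for the `config` field.
# All keys and values are illustrative placeholders.
custom_config = {
    "bioimageio": {  # here is the domain name
        "my_custom_key": 3837283,
        "another_key": {"nested": "value"},
    },
    "imagej": {  # config specific to ImageJ
        "macro_dir": "path/to/macro/file",
    },
}
```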
covers
pydantic-field
¤
covers: List[FileSource_cover]
Cover images. Please use an image smaller than 500KB with a width-to-height aspect ratio of 2:1.
documentation
pydantic-field
¤
documentation: Optional[FileSource] = None
URL or relative path to a markdown file with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
download_url
pydantic-field
¤
download_url: Optional[HttpUrl] = None
URL to download the resource from (deprecated)
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Optional[str] = None
A URL to the Git repository where the resource is being developed.
id
pydantic-field
¤
id: Optional[DatasetId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
license
pydantic-field
¤
license: Union[
LicenseId, DeprecatedLicenseId, str, None
] = None
An SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need that, please open a GitHub issue to discuss your intentions with the community.
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers and at least some of them should specify their github_user name.
rdf_source
pydantic-field
¤
rdf_source: Optional[FileSource] = None
Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from. Do not set this field in a YAML file.
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
version_number
pydantic-field
¤
version_number: Optional[int] = None
version number (n-th published version, not the semantic version)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
accept_author_strings
classmethod
¤
accept_author_strings(
authors: Union[Any, Sequence[Any]],
) -> Any
we unofficially accept strings as author entries
Source code in src/bioimageio/spec/generic/v0_2.py
deprecated_spdx_license
classmethod
¤
deprecated_spdx_license(
value: Optional[
Union[LicenseId, DeprecatedLicenseId, str]
],
)
Source code in src/bioimageio/spec/generic/v0_2.py
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py
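A minimal sketch of how load might be used, assuming this module re-exports DatasetDescr02 and InvalidDescr and that PyYAML is available; file names are placeholders.

```python
# Hedged sketch: create a description object from YAML content with the `load` factory.
from pathlib import Path

import yaml  # PyYAML, assumed to be installed

from bioimageio.spec.model.v0_5 import DatasetDescr02, InvalidDescr

data = yaml.safe_load(Path("rdf.yaml").read_text())  # bioimageio YAML content as a mapping
descr = DatasetDescr02.load(data)
if isinstance(descr, InvalidDescr):
    print("validation failed")  # `load` returns an InvalidDescr instead of raising
else:
    print(descr.name)
```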
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
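A hedged sketch of calling model_validate directly on a raw mapping; the ValidationContext import path and its perform_io_checks argument are assumptions.

```python
# Hedged sketch: validate a raw mapping with `model_validate` and an explicit context.
from bioimageio.spec import ValidationContext  # import path assumed
from bioimageio.spec.model.v0_5 import EnsureDtypeDescr

raw = {"id": "ensure_dtype", "kwargs": {"dtype": "uint8"}}
descr = EnsureDtypeDescr.model_validate(
    raw,
    context=ValidationContext(perform_io_checks=False),  # argument name assumed
)
print(descr.kwargs.dtype)  # "uint8"
```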
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| dest | (path/bytes stream of) destination zipfile |
Source code in src/bioimageio/spec/_internal/common_nodes.py
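A sketch of packaging a description; it assumes descr is an already-loaded, valid resource description (see the load sketch above) and that the output file name is a placeholder.

```python
# Hedged sketch: write the described resource (rdf.yaml + linked files) to a ZIP archive.
from zipfile import ZipFile

# `descr`: any valid, already-loaded resource description (see the `load` sketch above)
zip_file: ZipFile = descr.package("packaged_resource.zip")
print(zip_file.filename)
```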
warn_about_tag_categories
classmethod
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_2.py
DatasetId
¤
Bases: ResourceId
flowchart TD
bioimageio.spec.model.v0_5.DatasetId[DatasetId]
bioimageio.spec.generic.v0_3.ResourceId[ResourceId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec.generic.v0_3.ResourceId --> bioimageio.spec.model.v0_5.DatasetId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.generic.v0_3.ResourceId
click bioimageio.spec.model.v0_5.DatasetId href "" "bioimageio.spec.model.v0_5.DatasetId"
click bioimageio.spec.generic.v0_3.ResourceId href "" "bioimageio.spec.generic.v0_3.ResourceId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
Datetime
¤
Bases: RootModel[datetime]
flowchart TD
bioimageio.spec.model.v0_5.Datetime[Datetime]
click bioimageio.spec.model.v0_5.Datetime href "" "bioimageio.spec.model.v0_5.Datetime"
Timestamp in ISO 8601 format with a few restrictions listed here.
| METHOD | DESCRIPTION |
|---|---|
| now | |
now
classmethod
¤
now()
Source code in src/bioimageio/spec/_internal/types.py
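A small hedged sketch of Datetime usage; positional construction from an ISO 8601 string and the .root attribute follow standard pydantic RootModel behaviour and are assumptions here.

```python
# Hedged sketch: Datetime wraps a datetime.datetime via pydantic's RootModel.
from bioimageio.spec.model.v0_5 import Datetime

stamp = Datetime.now()                    # documented classmethod
parsed = Datetime("2024-05-01T12:30:00")  # ISO 8601 string, coerced/validated (assumed)
print(stamp.root, parsed.root)            # underlying datetime.datetime objects
```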
DeprecatedLicenseId
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.DeprecatedLicenseId[DeprecatedLicenseId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.DeprecatedLicenseId
click bioimageio.spec.model.v0_5.DeprecatedLicenseId href "" "bioimageio.spec.model.v0_5.DeprecatedLicenseId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | |
root_model
class-attribute
¤
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
Doi
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.Doi[Doi]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Doi
click bioimageio.spec.model.v0_5.Doi href "" "bioimageio.spec.model.v0_5.Doi"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
A digital object identifier, see https://www.doi.org/
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
EnsureDtypeDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Cast the tensor data type to EnsureDtypeKwargs.dtype (if not matching).
This can for example be used to ensure the inner neural network model gets a different input tensor data type than the fully described bioimage.io model does.
Examples:
The described bioimage.io model (incl. preprocessing) accepts any float32-compatible
tensor, normalizes it with percentiles and clipping and then casts it to uint8,
which is what the neural network in this example expects.
- in YAML

      inputs:
      - data:
          type: float32  # described bioimage.io model is compatible with any float32 input tensor
        preprocessing:
        - id: scale_range
          kwargs:
            axes: ['y', 'x']
            max_percentile: 99.8
            min_percentile: 5.0
        - id: clip
          kwargs:
            min: 0.0
            max: 1.0
        - id: ensure_dtype  # the neural network of the model requires uint8
          kwargs:
            dtype: uint8
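The equivalent Python construction (also given in the schema description below), with the imports spelled out; the import path is assumed to be this module.

```python
# Python equivalent of the YAML preprocessing chain above
# (as also given in the schema description below).
from bioimageio.spec.model.v0_5 import (
    AxisId,
    ClipDescr,
    ClipKwargs,
    EnsureDtypeDescr,
    EnsureDtypeKwargs,
    ScaleRangeDescr,
    ScaleRangeKwargs,
)

preprocessing = [
    ScaleRangeDescr(
        kwargs=ScaleRangeKwargs(
            axes=(AxisId("y"), AxisId("x")),
            max_percentile=99.8,
            min_percentile=5.0,
        )
    ),
    ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),
    EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype="uint8")),
]
```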
Show JSON schema:
{
"$defs": {
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `EnsureDtypeDescr`",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
}
Fields:
- id(Literal['ensure_dtype'])
- kwargs(EnsureDtypeKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
EnsureDtypeKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for EnsureDtypeDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `EnsureDtypeDescr`",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
}
Fields:
- dtype(Literal['float32', 'float64', 'uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64', 'bool'])
dtype
pydantic-field
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
FileDescr
pydantic-model
¤
Bases: Node
A file description
Show JSON schema:
{
"$defs": {
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
}
Fields:
- source(FileSource)
- sha256(Optional[Sha256])
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
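A hedged sketch of describing and checking a file with the documented fields and methods; the URL and hash value are placeholders.

```python
# Hedged sketch: a FileDescr with an optional SHA-256; URL and digest are placeholders.
from bioimageio.spec.model.v0_5 import FileDescr

weights = FileDescr(
    source="https://example.com/weights.pt",  # HttpUrl, relative path, or local file path
    sha256="0" * 64,                          # optional 64-character SHA-256 hex digest
)
weights.validate_sha256()      # checks the hash of the (downloaded) source file
reader = weights.get_reader()  # opens the source, downloading it if needed
```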
FixedZeroMeanUnitVarianceAlongAxisKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for FixedZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
}
Fields:
Validators:
- _mean_and_std_match
axis
pydantic-field
¤
axis: NonBatchAxisId
The axis of the mean/std values to normalize each entry along that dimension separately.
std
pydantic-field
¤
std: NotEmpty[List[float]]
The standard deviation value(s) to normalize with.
Size must match mean values.
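Matching the "independently along an axis" example shown for FixedZeroMeanUnitVarianceDescr below, a per-channel kwargs instance could look like this (import path assumed to be this module):

```python
# Per-channel normalization kwargs, mirroring the example in FixedZeroMeanUnitVarianceDescr below.
from bioimageio.spec.model.v0_5 import (
    AxisId,
    FixedZeroMeanUnitVarianceAlongAxisKwargs,
)

kwargs = FixedZeroMeanUnitVarianceAlongAxisKwargs(
    axis=AxisId("channel"),
    mean=[101.5, 102.5, 103.5],  # one mean per channel
    std=[11.7, 12.7, 13.7],      # one std per channel; must match `mean` in length
)
```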
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
FixedZeroMeanUnitVarianceDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Subtract a given mean and divide by the standard deviation.
Normalize with fixed, precomputed values for
FixedZeroMeanUnitVarianceKwargs.mean and FixedZeroMeanUnitVarianceKwargs.std.
Use FixedZeroMeanUnitVarianceAlongAxisKwargs for independent scaling along given axes.
Examples:
1. scalar value for whole tensor
   - in YAML

         preprocessing:
         - id: fixed_zero_mean_unit_variance
           kwargs:
             mean: 103.5
             std: 13.7

   - in Python

         >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(
         ...     kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)
         ... )]

2. independently along an axis
   - in YAML

         preprocessing:
         - id: fixed_zero_mean_unit_variance
           kwargs:
             axis: channel
             mean: [101.5, 102.5, 103.5]
             std: [11.7, 12.7, 13.7]

   - in Python

         >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(
         ...     kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(
         ...         axis=AxisId("channel"),
         ...         mean=[101.5, 102.5, 103.5],
         ...         std=[11.7, 12.7, 13.7],
         ...     )
         ... )]
Show JSON schema:
{
"$defs": {
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
}
Fields:
- id(Literal['fixed_zero_mean_unit_variance'])
- kwargs(Union[FixedZeroMeanUnitVarianceKwargs, FixedZeroMeanUnitVarianceAlongAxisKwargs])
id
pydantic-field
¤
id: Literal["fixed_zero_mean_unit_variance"] = (
"fixed_zero_mean_unit_variance"
)
implemented_id
class-attribute
¤
implemented_id: Literal["fixed_zero_mean_unit_variance"] = (
"fixed_zero_mean_unit_variance"
)
kwargs
pydantic-field
¤
kwargs: Union[
FixedZeroMeanUnitVarianceKwargs,
FixedZeroMeanUnitVarianceAlongAxisKwargs,
]
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
FixedZeroMeanUnitVarianceKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for FixedZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
}
Fields:
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
GenericModelDescrBase
pydantic-model
¤
Bases: ResourceDescrBase
Base for all resource descriptions including of model descriptions
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"CiteEntry": {
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Base for all resource descriptions including of model descriptions",
"properties": {
"name": {
"description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of this resource description and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
}
},
"required": [
"name"
],
"title": "generic.v0_3.GenericModelDescrBase",
"type": "object"
}
Fields:
- _validation_summary(Optional[ValidationSummary])
- _root(Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name(Optional[FileName])
- name(str)
- description(FAIR[str])
- covers(List[FileSource_cover])
- id_emoji(Optional[str])
- authors(FAIR[List[Author]])
- attachments(List[FileDescr_])
- cite(FAIR[List[CiteEntry]])
- license(FAIR[Union[LicenseId, DeprecatedLicenseId, None]])
- git_repo(Optional[HttpUrl])
- icon(Union[str, FileSource_, None])
- links(List[str])
- uploader(Optional[Uploader])
- maintainers(List[Maintainer])
- tags(FAIR[List[str]])
- version(Optional[Version])
- version_comment(Optional[str])
authors
pydantic-field
¤
The authors are the creators of this resource description and the primary points of contact.
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Optional[HttpUrl] = None
A URL to the Git repository where the resource is being developed.
icon
pydantic-field
¤
icon: Union[str, FileSource_, None] = None
An icon for illustration, e.g. on bioimage.io
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
license
pydantic-field
¤
license: FAIR[
Union[LicenseId, DeprecatedLicenseId, None]
] = None
An SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need that, please open a GitHub issue to discuss your intentions with the community.
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers and at least some of them have to specify their github_user name.
name
pydantic-field
¤
name: str
A human-friendly name of the resource description. May only contain letters, digits, underscore, minus, parentheses and spaces.
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
version_comment
pydantic-field
¤
version_comment: Optional[str] = None
A comment on the version of the resource.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. |
| strict | Whether to raise an exception on invalid fields. |
| from_attributes | Whether to extract data from object attributes. |
| context | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| dest | (path/bytes stream of) destination zipfile |
Source code in src/bioimageio/spec/_internal/common_nodes.py
warn_about_tag_categories
classmethod
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
HttpUrl
¤
Bases: RootHttpUrl
flowchart TD
bioimageio.spec.model.v0_5.HttpUrl[HttpUrl]
bioimageio.spec._internal.root_url.RootHttpUrl[RootHttpUrl]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.root_url.RootHttpUrl --> bioimageio.spec.model.v0_5.HttpUrl
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.root_url.RootHttpUrl
click bioimageio.spec.model.v0_5.HttpUrl href "" "bioimageio.spec.model.v0_5.HttpUrl"
click bioimageio.spec._internal.root_url.RootHttpUrl href "" "bioimageio.spec._internal.root_url.RootHttpUrl"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
A URL with the HTTP or HTTPS scheme.
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| absolute | analog to absolute method of pathlib |
| exists | True if URL is available |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| host | |
| parent | |
| parents | iterate over all URL parents (max 100) |
| path | |
| root_model | |
| scheme | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
absolute
¤
absolute()
analog to absolute method of pathlib.
Source code in src/bioimageio/spec/_internal/root_url.py
exists
¤
exists()
True if URL is available
Source code in src/bioimageio/spec/_internal/url.py
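A hedged usage sketch of the helpers documented above; the URL is a placeholder and the printed values are not guaranteed verbatim.

```python
# Hedged sketch of HttpUrl helpers documented above; the URL is a placeholder.
from bioimageio.spec.model.v0_5 import HttpUrl

url = HttpUrl("https://example.com/models/unet2d/rdf.yaml")
print(url.exists())    # True if the URL is reachable
print(url.parent)      # the parent "directory" URL
print(url.absolute())  # analog to pathlib's absolute method (documented above)
```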
Identifier
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.Identifier[Identifier]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Identifier
click bioimageio.spec.model.v0_5.Identifier href "" "bioimageio.spec.model.v0_5.Identifier"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
IndexAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
}
},
"required": [
"type"
],
"title": "model.v0_5.IndexAxisBase",
"type": "object"
}
Fields:
- description(str)
- type(Literal['index'])
- id(NonBatchAxisId)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. |
| `strict` | Whether to raise an exception on invalid fields. |
| `from_attributes` | Whether to extract data from object attributes. |
| `context` | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
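A hedged sketch of how `model_validate` is typically called on these axis descriptions, validating a plain mapping whose keys follow the JSON schema shown above; the description text is illustrative:

```python
from bioimageio.spec.model.v0_5 import IndexAxisBase

# validate a raw mapping into an axis description; invalid input raises ValidationError
axis = IndexAxisBase.model_validate({"type": "index", "description": "nucleus index"})
assert axis.description == "nucleus index"
```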
IndexInputAxis
pydantic-model
¤
Bases: IndexAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
}
Fields:
- size (Union[int, ParameterizedSize, SizeReference])
- id (NonBatchAxisId)
- description (str)
- type (Literal['index'])
- concatenable (bool)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, ParameterizedSize, SizeReference]
The size/length of this axis can be specified as
- fixed integer
- parameterized series of valid sizes (ParameterizedSize)
- reference to another axis with an optional offset (SizeReference)
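A hedged construction sketch for the three size variants listed above, mirroring the examples given in the JSON schema (`10`, `{min: 32, step: 16}`, and a `SizeReference`); the tensor and axis ids are hypothetical:

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    IndexInputAxis,
    ParameterizedSize,
    SizeReference,
    TensorId,
)

fixed = IndexInputAxis(size=10)                                  # fixed length
param = IndexInputAxis(size=ParameterizedSize(min=32, step=16))  # valid sizes: 32, 48, 64, ...
ref = IndexInputAxis(
    size=SizeReference(tensor_id=TensorId("t"), axis_id=AxisId("a"), offset=5)
)
```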
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
|
The object to validate.
TYPE:
|
|
Whether to raise an exception on invalid fields.
TYPE:
|
|
Whether to extract data from object attributes.
TYPE:
|
|
Additional context to pass to the validator.
TYPE:
|
| RAISES | DESCRIPTION |
|---|---|
ValidationError
|
If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
Self
|
The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
IndexOutputAxis
pydantic-model
¤
Bases: IndexAxisBase
Show JSON schema:
{
"$defs": {
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (`SizeReference`)\n- data dependent size using `DataDependentSize` (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
}
Fields:
- id (NonBatchAxisId)
- description (str)
- type (Literal['index'])
- size (Union[int, SizeReference, DataDependentSize])
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, SizeReference, DataDependentSize]
The size/length of this axis can be specified as
- fixed integer
- reference to another axis with an optional offset (SizeReference)
- data dependent size using DataDependentSize (size is only known after model inference)
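A hedged sketch of an output index axis whose length is only known after inference, using `DataDependentSize` with bounds as described in the schema above; the bounds are illustrative:

```python
from bioimageio.spec.model.v0_5 import DataDependentSize, IndexOutputAxis

# e.g. a detected-object index: at least 1 and at most 1000 entries per sample
out_axis = IndexOutputAxis(size=DataDependentSize(min=1, max=1000))
```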
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. |
| `strict` | Whether to raise an exception on invalid fields. |
| `from_attributes` | Whether to extract data from object attributes. |
| `context` | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
InputTensorDescr
pydantic-model
¤
Bases: TensorDescrBase[InputAxis]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above `BinarizeKwargs.threshold`/`BinarizeAlongAxisKwargs.threshold`\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `EnsureDtypeDescr`",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for `SoftmaxDescr`",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "input",
"description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"optional": {
"default": false,
"description": "indicates that this tensor may be `None`",
"title": "Optional",
"type": "boolean"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, it is added\n to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n 'ensure_dtype' step is added to ensure preprocessing steps are not unintentionally\n changing the data type.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.InputTensorDescr",
"type": "object"
}
Fields:
- description (str)
- axes (NotEmpty[Sequence[IO_AxisT]])
- test_tensor (FAIR[Optional[FileDescr_]])
- sample_tensor (FAIR[Optional[FileDescr_]])
- data (Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])
- id (TensorId)
- optional (bool)
- preprocessing (List[PreprocessingDescr])

Validators:
- _validate_axes → axes
- _validate_sample_tensor
- _check_data_type_across_channels → data
- _check_data_matches_channelaxis
- _validate_preprocessing_kwargs
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
id
pydantic-field
¤
id: TensorId
Input tensor id. No duplicates are allowed across all inputs and outputs.
preprocessing
pydantic-field
¤
preprocessing: List[PreprocessingDescr]
Description of how this input should be preprocessed.
Notes:
- If preprocessing does not start with an 'ensure_dtype' entry, one is added
  to ensure the input tensor's data type matches its data description.
- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an
  'ensure_dtype' step is added to ensure preprocessing steps do not unintentionally
  change the data type.
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. |
| `strict` | Whether to raise an exception on invalid fields. |
| `from_attributes` | Whether to extract data from object attributes. |
| `context` | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
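Putting the fields above together, a hedged construction sketch for a 2D single-channel input tensor with a `scale_range` preprocessing step; the ids, channel name, and sizes are illustrative, and `test_tensor` is left unset here even though providing one is recommended:

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    BatchAxis,
    ChannelAxis,
    Identifier,
    InputTensorDescr,
    ParameterizedSize,
    ScaleRangeDescr,
    ScaleRangeKwargs,
    SpaceInputAxis,
    TensorId,
)

input_descr = InputTensorDescr(
    id=TensorId("raw"),
    axes=[
        BatchAxis(),
        ChannelAxis(channel_names=[Identifier("dapi")]),
        SpaceInputAxis(id=AxisId("y"), size=ParameterizedSize(min=64, step=16)),
        SpaceInputAxis(id=AxisId("x"), size=ParameterizedSize(min=64, step=16)),
    ],
    preprocessing=[
        ScaleRangeDescr(
            kwargs=ScaleRangeKwargs(
                axes=(AxisId("y"), AxisId("x")),
                max_percentile=99.8,
                min_percentile=5.0,
            )
        )
    ],
)
```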
IntervalOrRatioDataDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
}
Fields:
- type (IntervalOrRatioDType)
- range (Tuple[Optional[float], Optional[float]])
- unit (Union[Literal['arbitrary unit'], SiUnit])
- scale (float)
- offset (Optional[float])

Validators:
- _replace_inf
range
pydantic-field
¤
range: Tuple[Optional[float], Optional[float]] = (
None,
None,
)
Tuple (minimum, maximum) specifying the allowed range of the data in this tensor.
None corresponds to min/max of what can be expressed by type.
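A hedged sketch describing uint8 data covering the full 0 to 255 range; either tuple entry may also be left as `None`:

```python
from bioimageio.spec.model.v0_5 import IntervalOrRatioDataDescr

data_descr = IntervalOrRatioDataDescr(type="uint8", range=(0, 255))
```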
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. |
| `strict` | Whether to raise an exception on invalid fields. |
| `from_attributes` | Whether to extract data from object attributes. |
| `context` | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
InvalidDescr
pydantic-model
¤
Bases: ResourceDescrBase
A representation of an invalid resource description
Show JSON schema:
{
"additionalProperties": true,
"description": "A representation of an invalid resource description",
"properties": {
"type": {
"title": "Type"
},
"format_version": {
"title": "Format Version"
}
},
"required": [
"type",
"format_version"
],
"title": "An invalid resource description",
"type": "object"
}
Fields:
- _validation_summary (Optional[ValidationSummary])
- _root (Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name (Optional[FileName])
- type (Any)
- format_version (Any)
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
implemented_format_version
class-attribute
¤
implemented_format_version: Literal['unknown'] = 'unknown'
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py
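A hedged sketch of the typical pattern around `InvalidDescr`: load a resource description and check whether validation failed. The file path is hypothetical, and printing `validation_summary` as the way to inspect errors is an assumption:

```python
from bioimageio.spec import load_description
from bioimageio.spec.model.v0_5 import InvalidDescr

descr = load_description("rdf.yaml")  # hypothetical path to a resource description
if isinstance(descr, InvalidDescr):
    # validation failed; the summary lists the errors (assumed public property)
    print(descr.validation_summary)
```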
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. |
| `strict` | Whether to raise an exception on invalid fields. |
| `from_attributes` | Whether to extract data from object attributes. |
| `context` | Additional context to pass to the validator. |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| `dest` | (path/bytes stream of) destination zipfile |
Source code in src/bioimageio/spec/_internal/common_nodes.py
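A hedged sketch of packaging a successfully loaded description into a zip archive; the source and destination file names are hypothetical:

```python
from zipfile import ZipFile

from bioimageio.spec import load_description
from bioimageio.spec.model.v0_5 import InvalidDescr

descr = load_description("rdf.yaml")  # hypothetical path
if not isinstance(descr, InvalidDescr):
    zip_file: ZipFile = descr.package("packaged_resource.zip")  # returns a zipfile.ZipFile
    print(zip_file.filename)
```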
KerasHdf5WeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
}
Fields:
- source (FileSource)
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- tensorflow_version (Version)
Validators:
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors:
either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
TensorFlow version used to create these weights.
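Putting the fields above together, a Keras HDF5 weights entry might be constructed as in the following sketch (file name, version, and comment are placeholders; the weights file is assumed to exist locally):
```python
from bioimageio.spec.model.v0_5 import KerasHdf5WeightsDescr

keras_weights = KerasHdf5WeightsDescr(
    source="weights.h5",                  # hypothetical local Keras HDF5 weights file
    tensorflow_version="2.15.1",          # required: TensorFlow version used to create these weights
    comment="exported via model.save()",  # optional free-text note
)
```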
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py, lines 306-312
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py, lines 298-304
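Continuing the weights-entry sketch above, reading the file through this method could look like this (assuming the returned reader behaves like a standard binary file object):
```python
reader = keras_weights.get_reader(progressbar=False)  # downloads the source first if it is a URL
header = reader.read(8)                                # read the first bytes of the weights file
```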
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py, lines 270-296
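For example, continuing the sketch above, the hash check can be triggered explicitly:
```python
keras_weights.validate_sha256()                      # check the sha256 field against the source file
keras_weights.validate_sha256(force_recompute=True)  # recompute the hash even if it was checked before
```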
LicenseId
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.LicenseId[LicenseId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.LicenseId
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | TYPE: Type[RootModel[Any]] |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 29-33
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 35-44
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 19-23
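As a sketch of how such a validated string is used (assuming construction validates the value via the root model and that invalid identifiers raise a validation error):
```python
from bioimageio.spec.model.v0_5 import LicenseId

license_id = LicenseId("MIT")  # a known SPDX license identifier
print(str(license_id))         # behaves like a plain string: "MIT"
```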
LinkedDataset
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io dataset.
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
}
},
"required": [
"id"
],
"title": "dataset.v0_3.LinkedDataset",
"type": "object"
}
Fields:
- version (Optional[Version])
- id (DatasetId)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
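A minimal sketch of creating such a reference (the dataset id is a placeholder, not a real collection entry; the import path follows this page's module):
```python
from bioimageio.spec.model.v0_5 import LinkedDataset

linked = LinkedDataset(
    id="placeholder-dataset-id",  # hypothetical dataset id from the bioimage.io collection
    version="1.0.0",              # optional SemVer version of the linked dataset
)
```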
LinkedDataset02
pydantic-model
¤
Bases: Node
Reference to a bioimage.io dataset.
Show JSON schema:
{
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version) of linked dataset",
"title": "Version Number"
}
},
"required": [
"id"
],
"title": "dataset.v0_2.LinkedDataset",
"type": "object"
}
Fields:
- id (DatasetId)
- version_number (Optional[int])
version_number
pydantic-field
¤
version_number: Optional[int] = None
version number (n-th published version, not the semantic version) of linked dataset
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
LinkedModel
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io model.
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io model.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid model `id` from the bioimage.io collection.",
"minLength": 1,
"title": "ModelId",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.LinkedModel",
"type": "object"
}
Fields:
- version (Optional[Version])
- id (ModelId)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
LinkedResource
pydantic-model
¤
Bases: LinkedResourceBase
Reference to a bioimage.io resource
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"description": "Reference to a bioimage.io resource",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid resource `id` from the official bioimage.io collection.",
"minLength": 1,
"title": "ResourceId",
"type": "string"
}
},
"required": [
"id"
],
"title": "generic.v0_3.LinkedResource",
"type": "object"
}
Fields:
- version (Optional[Version])
- id (ResourceId)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
LinkedResourceBase
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
}
},
"title": "generic.v0_3.LinkedResourceBase",
"type": "object"
}
Fields:
- version (Optional[Version])
version
pydantic-field
¤
version: Optional[Version] = None
The version of the linked resource following SemVer 2.0.
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
LowerCaseIdentifier
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.LowerCaseIdentifier[LowerCaseIdentifier]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.LowerCaseIdentifier
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | TYPE: Type[RootModel[Any]] |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
LowerCaseIdentifierAnno
]
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 29-33
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 35-44
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py, lines 19-23
Maintainer
pydantic-model
¤
Bases: _Maintainer_v0_2
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
}
Fields:
- affiliation (Optional[str])
- email (Optional[EmailStr])
- orcid (Optional[OrcidId])
- name (Optional[str])
- github_user (str)
Validators:
- validate_github_user
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
validate_github_user
pydantic-validator
¤
validate_github_user(value: str)
Source code in src/bioimageio/spec/generic/v0_3.py, lines 140-142
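Since github_user is the only required field, a minimal maintainer entry can be constructed as in this sketch (all values are placeholders; note that validate_github_user may run additional checks on the user name):
```python
from bioimageio.spec.model.v0_5 import Maintainer

maintainer = Maintainer(
    github_user="example-user",      # required; checked by the validate_github_user validator
    name="Example Maintainer",       # optional
    email="maintainer@example.com",  # optional
)
```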
ModelDescr
pydantic-model
¤
Bases: GenericModelDescrBase
Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. These fields are typically stored in a YAML file which we call a model resource description file (model RDF).
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"AttachmentsDescr": {
"additionalProperties": true,
"properties": {
"files": {
"description": "File attachments",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Files",
"type": "array"
}
},
"title": "generic.v0_2.AttachmentsDescr",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above `BinarizeKwargs.threshold`/`BinarizeAlongAxisKwargs.threshold`\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"Datetime": {
"description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).",
"format": "date-time",
"title": "Datetime",
"type": "string"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `EnsureDtypeDescr`",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (`SizeReference`)\n- data dependent size using `DataDependentSize` (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"InputTensorDescr": {
"additionalProperties": false,
"properties": {
"id": {
"default": "input",
"description": "Input tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"optional": {
"default": false,
"description": "indicates that this tensor may be `None`",
"title": "Optional",
"type": "boolean"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.\n\nnotes:\n- If preprocessing does not start with an 'ensure_dtype' entry, it is added\n to ensure an input tensor's data type matches the input tensor's data description.\n- If preprocessing does not end with an 'ensure_dtype' or 'binarize' entry, an\n 'ensure_dtype' step is added to ensure preprocessing steps are not unintentionally\n changing the data type.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.InputTensorDescr",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"KerasHdf5WeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
},
"LinkedDataset": {
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
}
},
"required": [
"id"
],
"title": "dataset.v0_3.LinkedDataset",
"type": "object"
},
"LinkedModel": {
"additionalProperties": false,
"description": "Reference to a bioimage.io model.",
"properties": {
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the linked resource following SemVer 2.0."
},
"id": {
"description": "A valid model `id` from the bioimage.io collection.",
"minLength": 1,
"title": "ModelId",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.LinkedModel",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"OnnxWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
},
"OutputTensorDescr": {
"additionalProperties": false,
"properties": {
"id": {
"default": "output",
"description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n If not given this is added to cast to this tensor's `data.type`.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.OutputTensorDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"PytorchStateDictWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ReproducibilityTolerance": {
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.0001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
},
"RunMode": {
"additionalProperties": false,
"properties": {
"name": {
"anyOf": [
{
"const": "deepimagej",
"type": "string"
},
{
"type": "string"
}
],
"description": "Run mode name",
"title": "Name"
},
"kwargs": {
"additionalProperties": true,
"description": "Run mode specific key word arguments",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"name"
],
"title": "model.v0_4.RunMode",
"type": "object"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceKwargs`",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for `SoftmaxDescr`",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TensorflowJsWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
},
"TensorflowSavedModelBundleWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
},
"TorchscriptWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"WeightsDescr": {
"additionalProperties": false,
"properties": {
"keras_hdf5": {
"anyOf": [
{
"$ref": "#/$defs/KerasHdf5WeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"onnx": {
"anyOf": [
{
"$ref": "#/$defs/OnnxWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"pytorch_state_dict": {
"anyOf": [
{
"$ref": "#/$defs/PytorchStateDictWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_js": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowJsWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_saved_model_bundle": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"torchscript": {
"anyOf": [
{
"$ref": "#/$defs/TorchscriptWeightsDescr"
},
{
"type": "null"
}
],
"default": null
}
},
"title": "model.v0_5.WeightsDescr",
"type": "object"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
},
"bioimageio__spec__dataset__v0_2__DatasetDescr": {
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description",
"minLength": 1,
"title": "Name",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 1,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "file and other attachments"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"config": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n bioimageio: # here is the domain name\n my_custom_key: 3837283\n another_key:\n nested: value\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
"examples": [
{
"bioimageio": {
"another_key": {
"nested": "value"
},
"my_custom_key": 3837283
},
"imagej": {
"macro_dir": "path/to/macro/file"
}
}
],
"title": "Config",
"type": "object"
},
"download_url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to download the resource from (deprecated)",
"title": "Download Url"
},
"git_repo": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_2__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"rdf_source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
"title": "Rdf Source"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version)",
"title": "Version Number"
},
"format_version": {
"const": "0.2.4",
"description": "The format version of this resource specification\n(not the `version` of the resource description)\nWhen creating a new resource always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"documentation": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
],
"title": "Documentation"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"description",
"format_version",
"type"
],
"title": "dataset 0.2.4",
"type": "object"
},
"bioimageio__spec__dataset__v0_3__DatasetDescr": {
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description.\nMay only contains letters, digits, underscore, minus, parentheses and spaces.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of this resource description and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.3.0",
"description": "The **format** version of this resource specification",
"title": "Format Version",
"type": "string"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file encoded in UTF-8 with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"title": "Documentation"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"config": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Config",
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a GitHub repo URL in `config` since there is a `git_repo` field.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n giraffe_neckometer: # here is the domain name\n length: 3837283\n address:\n home: zoo\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource.\n(Packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files.)"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"parent": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The description from which this one is derived",
"title": "Parent"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"format_version",
"type"
],
"title": "dataset 0.3.0",
"type": "object"
},
"bioimageio__spec__generic__v0_2__Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_2.Author",
"type": "object"
},
"bioimageio__spec__generic__v0_2__CiteEntry": {
"additionalProperties": false,
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details. (alternatively specify `url`)",
"title": "Doi"
},
"url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a `doi` instead)",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_2.CiteEntry",
"type": "object"
},
"bioimageio__spec__generic__v0_2__Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_2.Maintainer",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"bioimageio__spec__generic__v0_3__BioimageioConfig": {
"additionalProperties": true,
"description": "bioimage.io internal metadata.",
"properties": {},
"title": "generic.v0_3.BioimageioConfig",
"type": "object"
},
"bioimageio__spec__generic__v0_3__CiteEntry": {
"additionalProperties": false,
"description": "A citation that should be referenced in work using this resource.",
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details.\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Doi"
},
"url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a **doi** instead/also).\nNote:\n Either **doi** or **url** have to be specified.",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_3.CiteEntry",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Config": {
"additionalProperties": true,
"description": "A place to store additional metadata (often tool specific).\n\nSuch additional metadata is typically set programmatically by the respective tool\nor by people with specific insights into the tool.\nIf you want to store additional metadata that does not match any of the other\nfields, think of a key unlikely to collide with anyone elses use-case/tool and save\nit here.\n\nPlease consider creating [an issue in the bioimageio.spec repository](https://github.com/bioimage-io/spec-bioimage-io/issues/new?template=Blank+issue)\nif you are not sure if an existing field could cover your use case\nor if you think such a field should exist.",
"properties": {
"bioimageio": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__BioimageioConfig"
}
},
"title": "generic.v0_3.Config",
"type": "object"
},
"bioimageio__spec__generic__v0_3__Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_3.Maintainer",
"type": "object"
},
"bioimageio__spec__model__v0_5__BioimageioConfig": {
"additionalProperties": true,
"properties": {
"reproducibility_tolerance": {
"default": [],
"description": "Tolerances to allow when reproducing the model's test outputs\nfrom the model's test inputs.\nOnly the first entry matching tensor id and weights format is considered.",
"items": {
"$ref": "#/$defs/ReproducibilityTolerance"
},
"title": "Reproducibility Tolerance",
"type": "array"
}
},
"title": "model.v0_5.BioimageioConfig",
"type": "object"
},
"bioimageio__spec__model__v0_5__Config": {
"additionalProperties": true,
"properties": {
"bioimageio": {
"$ref": "#/$defs/bioimageio__spec__model__v0_5__BioimageioConfig"
}
},
"title": "model.v0_5.Config",
"type": "object"
}
},
"additionalProperties": false,
"description": "Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.\nThese fields are typically stored in a YAML file which we call a model resource description file (model RDF).",
"properties": {
"name": {
"description": "A human-readable name of this model.\nIt should be no longer than 64 characters\nand may only contain letter, number, underscore, minus, parentheses and spaces.\nWe recommend to chose a name that refers to the model's task and image modality.",
"maxLength": 128,
"minLength": 5,
"title": "Name",
"type": "string"
},
"description": {
"default": "",
"description": "A string containing a brief description.",
"maxLength": 1024,
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1 or 1:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the model RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"description": "file attachments",
"items": {
"$ref": "#/$defs/FileDescr"
},
"title": "Attachments",
"type": "array"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__CiteEntry"
},
"title": "Cite",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose)\nto discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"git_repo": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration, e.g. on bioimage.io",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified, `authors` are maintainers and at least some of them has to specify their `github_user` name",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_comment": {
"anyOf": [
{
"maxLength": 512,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A comment on the version of the resource.",
"title": "Version Comment"
},
"format_version": {
"const": "0.5.6",
"description": "Version of the bioimage.io model description specification used.\nWhen creating a new model always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"type": {
"const": "model",
"description": "Specialized resource type 'model'",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"documentation": {
"anyOf": [
{
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
]
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.\nThe documentation should include a '#[#] Validation' (sub)section\nwith details on how to quantitatively validate the model on unseen data.",
"title": "Documentation"
},
"inputs": {
"description": "Describes the input tensors expected by this model.",
"items": {
"$ref": "#/$defs/InputTensorDescr"
},
"minItems": 1,
"title": "Inputs",
"type": "array"
},
"outputs": {
"description": "Describes the output tensors.",
"items": {
"$ref": "#/$defs/OutputTensorDescr"
},
"minItems": 1,
"title": "Outputs",
"type": "array"
},
"packaged_by": {
"description": "The persons that have packaged and uploaded this model.\nOnly required if those persons differ from the `authors`.",
"items": {
"$ref": "#/$defs/bioimageio__spec__generic__v0_3__Author"
},
"title": "Packaged By",
"type": "array"
},
"parent": {
"anyOf": [
{
"$ref": "#/$defs/LinkedModel"
},
{
"type": "null"
}
],
"default": null,
"description": "The model from which this model is derived, e.g. by fine-tuning the weights."
},
"run_mode": {
"anyOf": [
{
"$ref": "#/$defs/RunMode"
},
{
"type": "null"
}
],
"default": null,
"description": "Custom run mode for this model: for more complex prediction procedures like test time\ndata augmentation that currently cannot be expressed in the specification.\nNo standard run modes are defined yet."
},
"timestamp": {
"$ref": "#/$defs/Datetime",
"description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).\n(In Python a datetime object is valid, too)."
},
"training_data": {
"anyOf": [
{
"$ref": "#/$defs/LinkedDataset"
},
{
"$ref": "#/$defs/bioimageio__spec__dataset__v0_3__DatasetDescr"
},
{
"$ref": "#/$defs/bioimageio__spec__dataset__v0_2__DatasetDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "The dataset used to train this model",
"title": "Training Data"
},
"weights": {
"$ref": "#/$defs/WeightsDescr",
"description": "The weights for this model.\nWeights can be given for different formats, but should otherwise be equivalent.\nThe available weight formats determine which consumers can use this model."
},
"config": {
"$ref": "#/$defs/bioimageio__spec__model__v0_5__Config"
}
},
"required": [
"name",
"format_version",
"type",
"inputs",
"outputs",
"weights"
],
"title": "model 0.5.6",
"type": "object"
}
Fields:
- _validation_summary (Optional[ValidationSummary])
- _root (Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name (Optional[FileName])
- description (FAIR[str])
- covers (List[FileSource_cover])
- id_emoji (Optional[str])
- attachments (List[FileDescr_])
- cite (FAIR[List[CiteEntry]])
- license (FAIR[Union[LicenseId, DeprecatedLicenseId, None]])
- git_repo (Optional[HttpUrl])
- icon (Union[str, FileSource_, None])
- links (List[str])
- uploader (Optional[Uploader])
- maintainers (List[Maintainer])
- tags (FAIR[List[str]])
- version (Optional[Version])
- version_comment (Optional[str])
- format_version (Literal['0.5.6'])
- type (Literal['model'])
- id (Optional[ModelId])
- authors (FAIR[List[Author]])
- documentation (FAIR[Optional[FileSource_documentation]])
- inputs (NotEmpty[Sequence[InputTensorDescr]])
- name (str)
- outputs (NotEmpty[Sequence[OutputTensorDescr]])
- packaged_by (List[Author])
- parent (Optional[LinkedModel])
- run_mode (Optional[RunMode])
- timestamp (Datetime)
- training_data (Union[None, LinkedDataset, DatasetDescr, DatasetDescr02])
- weights (WeightsDescr)
- config (Config)
Validators:
- _validate_documentation → documentation
- _validate_input_axes → inputs
- _validate_test_tensors
- _validate_tensor_references_in_proc_kwargs
- _validate_tensor_ids → outputs
- _validate_output_axes → outputs
- _validate_parent_is_not_self
- _add_default_cover
- _convert
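For orientation, a minimal, hedged sketch of how a description exposing these fields is typically loaded and inspected. It assumes `load_description` is available at the `bioimageio.spec` top level and that the placeholder path points to an existing model `rdf.yaml`; the path and printed fields are illustrative.

```python
# Minimal sketch (assumptions noted above): load a model RDF and inspect
# a few of the fields listed here. "path/to/rdf.yaml" is a placeholder.
from bioimageio.spec import load_description
from bioimageio.spec.model.v0_5 import ModelDescr

descr = load_description("path/to/rdf.yaml")

if isinstance(descr, ModelDescr):
    print(descr.name, descr.format_version)   # e.g. "my model", "0.5.6"
    print([t.id for t in descr.inputs])       # ids of the described input tensors
else:
    # invalid or older-format descriptions carry a validation summary instead
    print(descr.validation_summary)
```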
authors
pydantic-field
¤
The authors are the creators of the model RDF and the primary points of contact.
documentation
pydantic-field
¤
documentation: FAIR[Optional[FileSource_documentation]] = (
None
)
URL or relative path to a markdown file with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
The documentation should include a '#[#] Validation' (sub)section
with details on how to quantitatively validate the model on unseen data.
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Optional[HttpUrl] = None
A URL to the Git repository where the resource is being developed.
icon
pydantic-field
¤
icon: Union[str, FileSource_, None] = None
An icon for illustration, e.g. on bioimage.io
id
pydantic-field
¤
id: Optional[ModelId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
inputs
pydantic-field
¤
inputs: NotEmpty[Sequence[InputTensorDescr]]
Describes the input tensors expected by this model.
license
pydantic-field
¤
license: FAIR[
Union[LicenseId, DeprecatedLicenseId, None]
] = None
An SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need one, please open a GitHub issue to discuss your intentions with the community.
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, the authors are the maintainers, and at least some of them have to specify their github_user name.
name
pydantic-field
¤
name: str
A human-readable name of this model. It should be no longer than 64 characters and may only contain letters, numbers, underscores, minus signs, parentheses, and spaces. We recommend choosing a name that refers to the model's task and image modality.
outputs
pydantic-field
¤
outputs: NotEmpty[Sequence[OutputTensorDescr]]
Describes the output tensors.
packaged_by
pydantic-field
¤
packaged_by: List[Author]
The persons that have packaged and uploaded this model.
Only required if those persons differ from the authors.
parent
pydantic-field
¤
parent: Optional[LinkedModel] = None
The model from which this model is derived, e.g. by fine-tuning the weights.
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
run_mode
pydantic-field
¤
run_mode: Optional[RunMode] = None
Custom run mode for this model: for more complex prediction procedures like test time data augmentation that currently cannot be expressed in the specification. No standard run modes are defined yet.
training_data
pydantic-field
¤
training_data: Union[
None, LinkedDataset, DatasetDescr, DatasetDescr02
] = None
The dataset used to train this model
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
version_comment
pydantic-field
¤
version_comment: Optional[str] = None
A comment on the version of the resource.
weights
pydantic-field
¤
weights: WeightsDescr
The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py
convert_from_old_format_wo_validation
classmethod
¤
convert_from_old_format_wo_validation(
data: Dict[str, Any],
) -> None
Convert metadata following an older format version to this class's format without validating the result.
Source code in src/bioimageio/spec/model/v0_5.py
get_axis_sizes
¤
get_axis_sizes(
ns: Mapping[
Tuple[TensorId, AxisId], ParameterizedSize_N
],
batch_size: Optional[int] = None,
*,
max_input_shape: Optional[
Mapping[Tuple[TensorId, AxisId], int]
] = None,
) -> _AxisSizes
Determine input and output block shape for scale factors ns of parameterized input sizes.
| PARAMETER | DESCRIPTION |
|---|---|
| ns | Scale factor. TYPE: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N] |
| batch_size | The desired size of the batch dimension. If given, batch_size overwrites any batch size present in max_input_shape. Default 1. TYPE: Optional[int] |
| max_input_shape | Limits the derived block shapes. Each axis for which the input size, parameterized by n, ... TYPE: Optional[Mapping[Tuple[TensorId, AxisId], int]] |

| RETURNS | DESCRIPTION |
|---|---|
| _AxisSizes | Resolved axis sizes for model inputs and outputs. |
Source code in src/bioimageio/spec/model/v0_5.py
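To make the parameter table above concrete, a hedged usage sketch follows. It reuses the `descr` object from the loading sketch further up, the tensor and axis ids are illustrative, and the attribute names `inputs`/`outputs` on the returned `_AxisSizes` are an assumption suggested by the return description.

```python
# Hedged sketch: resolve concrete sizes for scale factor n=3 of a parameterized
# axis. `descr` is a loaded ModelDescr (see the earlier sketch); "input0" and "x"
# are illustrative ids that must exist in that description.
from bioimageio.spec.model.v0_5 import AxisId, TensorId

ns = {(TensorId("input0"), AxisId("x")): 3}
axis_sizes = descr.get_axis_sizes(ns, batch_size=1)
print(axis_sizes.inputs)   # assumed: {(tensor_id, axis_id): size, ...} for inputs
print(axis_sizes.outputs)  # assumed: resolved (possibly data-dependent) output sizes
```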
get_batch_size
staticmethod
¤
Source code in src/bioimageio/spec/model/v0_5.py
get_input_test_arrays
¤
get_input_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
get_ns
¤
get parameter n for each parameterized axis
such that the valid input size is >= the given input size
Source code in src/bioimageio/spec/model/v0_5.py
get_output_tensor_sizes
¤
get_output_tensor_sizes(
input_sizes: Mapping[TensorId, Mapping[AxisId, int]],
) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]
Returns the tensor output sizes for the given input_sizes. The tensor output size is exact only if input_sizes is a valid input shape; otherwise it may be larger than the actual (valid) output.
Source code in src/bioimageio/spec/model/v0_5.py
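A short, hedged sketch of this call; `descr` and the tensor/axis ids are carried over from the earlier sketches and remain illustrative.

```python
# Hedged sketch: map concrete input sizes to the corresponding output sizes.
from bioimageio.spec.model.v0_5 import AxisId, TensorId

input_sizes = {TensorId("input0"): {AxisId("y"): 512, AxisId("x"): 512}}
output_sizes = descr.get_output_tensor_sizes(input_sizes)
print(output_sizes)  # {output tensor id: {axis id: int or data-dependent size}}
```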
get_output_test_arrays
¤
get_output_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_5.py
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py
get_tensor_sizes
¤
get_tensor_sizes(
ns: Mapping[
Tuple[TensorId, AxisId], ParameterizedSize_N
],
batch_size: int,
) -> _TensorSizes
Source code in src/bioimageio/spec/model/v0_5.py
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py
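A hedged sketch of the factory method. It assumes ruamel.yaml is installed (any YAML 1.2 parser would do) and uses an illustrative local rdf.yaml path.

```python
# Hedged sketch: create a description object from already-parsed YAML content.
from pathlib import Path

from ruamel.yaml import YAML  # assumption: any YAML 1.2 parser works here

from bioimageio.spec import InvalidDescr
from bioimageio.spec.model.v0_5 import ModelDescr

data = YAML(typ="safe").load(Path("rdf.yaml"))  # illustrative path
descr = ModelDescr.load(data)
if isinstance(descr, InvalidDescr):
    print(descr.validation_summary)  # details on what failed
    raise ValueError("rdf.yaml does not contain a valid model 0.5 description")
```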
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
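For comparison with `load`, a hedged sketch of direct pydantic validation. That `ValidationContext` accepts a `root` argument for resolving relative paths is an assumption based on the `root` property documented above; `data` is the parsed RDF dict from the previous sketch and the folder path is illustrative.

```python
# Hedged sketch: validate a raw RDF dict with an explicit validation context
# (e.g. to resolve relative file paths against a chosen root directory).
from pathlib import Path

from bioimageio.spec import ValidationContext
from bioimageio.spec.model.v0_5 import ModelDescr

descr = ModelDescr.model_validate(
    data,  # parsed rdf.yaml content, see the previous sketch
    context=ValidationContext(root=Path("path/to/model/folder")),  # assumption: `root` kwarg
)
```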
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| dest | (path/bytes stream of) destination zipfile. TYPE: Optional[Union[ZipFile, IO[bytes], Path, str]] |
Source code in src/bioimageio/spec/_internal/common_nodes.py
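A hedged sketch showing the packaging call next to its dry-run counterpart `get_package_content` documented above; `descr` is the ModelDescr from the earlier sketches and the destination file name is illustrative.

```python
# Hedged sketch: package the described resource (rdf.yaml plus linked files).
zip_file = descr.package("packaged_model.zip")  # dest may also be a Path, ZipFile or bytes stream
content = descr.get_package_content()           # same content as a dict, without writing a zip
```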
warn_about_tag_categories
classmethod
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_3.py
ModelId
¤
Bases: ResourceId
flowchart TD
bioimageio.spec.model.v0_5.ModelId[ModelId]
bioimageio.spec.generic.v0_3.ResourceId[ResourceId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec.generic.v0_3.ResourceId --> bioimageio.spec.model.v0_5.ModelId
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.generic.v0_3.ResourceId
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | TYPE: |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
Node
pydantic-model
¤
Bases: pydantic.BaseModel
Show JSON schema:
{
"additionalProperties": false,
"properties": {},
"title": "_internal.node.Node",
"type": "object"
}
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
NodeWithExplicitlySetFields
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {},
"title": "_internal.common_nodes.NodeWithExplicitlySetFields",
"type": "object"
}
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
NominalOrOrdinalDataDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
}
Fields:
Validators:
- _validate_values_match_type
values
pydantic-field
¤
values: TVs
A fixed set of nominal or an ascending sequence of ordinal values.
In this case data.type is required to be an unsigned integer type, e.g. 'uint8'.
String values are interpreted as labels for tensor values 0, ..., N.
Note: as YAML 1.2 does not natively support a "set" datatype,
nominal values should be given as a sequence (aka list/array) as well.
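Following the schema above, a hedged sketch of a nominal data description where string values label uint8 tensor values; the label names are illustrative.

```python
# Hedged sketch: uint8 tensor values 0, 1, 2 act as labels.
from bioimageio.spec.model.v0_5 import NominalOrOrdinalDataDescr

data_descr = NominalOrOrdinalDataDescr(
    values=["background", "cell", "membrane"],  # labels for tensor values 0, 1, 2
    type="uint8",                               # an unsigned integer type, as required for labels
)
```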
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
OnnxWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
}
Fields:
- source (FileSource)
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- opset_version (int)
- external_data (Optional[FileDescr_external_data])
Validators:
- _validate
- _validate_external_data_unique_file_name
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that have trained this model resulting in the original weights file.
(If this is the initial weights entry, i.e. it does not have a parent)
Or the person(s) who have converted the weights to this weights format.
(If this is a child weight, i.e. it has a parent field)
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
external_data
pydantic-field
¤
external_data: Optional[FileDescr_external_data] = None
Source of the external ONNX data file holding the weights. (If present source holds the ONNX architecture without weights).
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
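A hedged construction sketch for a complete ONNX weights entry (URL and hash are placeholders; a validation context with IO checks enabled may additionally try to resolve the source and verify the hash, so a real file may be required there):
from bioimageio.spec.model.v0_5 import OnnxWeightsDescr
onnx_weights = OnnxWeightsDescr(
    source="https://example.com/weights.onnx",  # placeholder URL
    sha256="0" * 64,                            # placeholder 64-character hash
    opset_version=15,
    parent="pytorch_state_dict",                # converted from the state dict entry
    comment="exported with torch.onnx.export",
)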
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
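A hedged usage sketch, continuing the onnx_weights example above (assuming the source is actually resolvable):
reader = onnx_weights.get_reader(progressbar=False)  # downloads remote sources if needed
# onnx_weights.download(progressbar=False) is an alias for the same call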
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
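A hedged one-liner, again assuming a real, resolvable weights source:
onnx_weights.validate_sha256()                      # verify the sha256 of source
onnx_weights.validate_sha256(force_recompute=True)  # force recomputation of the hash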
OrcidId
¤
Bases: ValidatedString
Class hierarchy: bioimageio.spec._internal.validated_string.ValidatedString → bioimageio.spec.model.v0_5.OrcidId
An ORCID identifier, see https://orcid.org/
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | TYPE: |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
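A hedged construction sketch using the example ID given in the schema above (whether validation runs on construction or only on model assignment depends on the ValidatedString internals):
from bioimageio.spec.model.v0_5 import OrcidId
orcid = OrcidId("0000-0001-2345-6789")  # hyphenated groups of 4 digits, ISO 7064 11,2 checksum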
OutputTensorDescr
pydantic-model
¤
Bases: TensorDescrBase[OutputAxis]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"BinarizeAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold values along `axis`",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Threshold",
"type": "array"
},
"axis": {
"description": "The `threshold` axis",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"threshold",
"axis"
],
"title": "model.v0_5.BinarizeAlongAxisKwargs",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "Binarize the tensor with a fixed threshold.\n\nValues above `BinarizeKwargs.threshold`/`BinarizeAlongAxisKwargs.threshold`\nwill be set to one, values below the threshold to zero.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: binarize\n kwargs:\n axis: 'channel'\n threshold: [0.25, 0.5, 0.75]\n ```\n- in Python:\n >>> postprocessing = [BinarizeDescr(\n ... kwargs=BinarizeAlongAxisKwargs(\n ... axis=AxisId('channel'),\n ... threshold=[0.25, 0.5, 0.75],\n ... )\n ... )]",
"properties": {
"id": {
"const": "binarize",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/BinarizeKwargs"
},
{
"$ref": "#/$defs/BinarizeAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_5.BinarizeKwargs",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Set tensor values below min to min and above max to max.\n\nSee `ScaleRangeDescr` for examples.",
"properties": {
"id": {
"const": "clip",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"EnsureDtypeDescr": {
"additionalProperties": false,
"description": "Cast the tensor data type to `EnsureDtypeKwargs.dtype` (if not matching).\n\nThis can for example be used to ensure the inner neural network model gets a\ndifferent input tensor data type than the fully described bioimage.io model does.\n\nExamples:\n The described bioimage.io model (incl. preprocessing) accepts any\n float32-compatible tensor, normalizes it with percentiles and clipping and then\n casts it to uint8, which is what the neural network in this example expects.\n - in YAML\n ```yaml\n inputs:\n - data:\n type: float32 # described bioimage.io model is compatible with any float32 input tensor\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n - id: ensure_dtype # the neural network of the model requires uint8\n kwargs:\n dtype: uint8\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(kwargs=ClipKwargs(min=0.0, max=1.0)),\n ... EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype=\"uint8\")),\n ... ]",
"properties": {
"id": {
"const": "ensure_dtype",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/EnsureDtypeKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.EnsureDtypeDescr",
"type": "object"
},
"EnsureDtypeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `EnsureDtypeDescr`",
"properties": {
"dtype": {
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Dtype",
"type": "string"
}
},
"required": [
"dtype"
],
"title": "model.v0_5.EnsureDtypeKwargs",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceAlongAxisKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value(s) to normalize with.",
"items": {
"type": "number"
},
"minItems": 1,
"title": "Mean",
"type": "array"
},
"std": {
"description": "The standard deviation value(s) to normalize with.\nSize must match `mean` values.",
"items": {
"minimum": 1e-06,
"type": "number"
},
"minItems": 1,
"title": "Std",
"type": "array"
},
"axis": {
"description": "The axis of the mean/std values to normalize each entry along that dimension\nseparately.",
"examples": [
"channel",
"index"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"required": [
"mean",
"std",
"axis"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceAlongAxisKwargs",
"type": "object"
},
"FixedZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract a given mean and divide by the standard deviation.\n\nNormalize with fixed, precomputed values for\n`FixedZeroMeanUnitVarianceKwargs.mean` and `FixedZeroMeanUnitVarianceKwargs.std`\nUse `FixedZeroMeanUnitVarianceAlongAxisKwargs` for independent scaling along given\naxes.\n\nExamples:\n1. scalar value for whole tensor\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n mean: 103.5\n std: 13.7\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceKwargs(mean=103.5, std=13.7)\n ... )]\n\n2. independently along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: fixed_zero_mean_unit_variance\n kwargs:\n axis: channel\n mean: [101.5, 102.5, 103.5]\n std: [11.7, 12.7, 13.7]\n ```\n - in Python\n >>> preprocessing = [FixedZeroMeanUnitVarianceDescr(\n ... kwargs=FixedZeroMeanUnitVarianceAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... mean=[101.5, 102.5, 103.5],\n ... std=[11.7, 12.7, 13.7],\n ... )\n ... )]",
"properties": {
"id": {
"const": "fixed_zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceKwargs"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceDescr",
"type": "object"
},
"FixedZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `FixedZeroMeanUnitVarianceDescr`",
"properties": {
"mean": {
"description": "The mean value to normalize with.",
"title": "Mean",
"type": "number"
},
"std": {
"description": "The standard deviation value to normalize with.",
"minimum": 1e-06,
"title": "Std",
"type": "number"
}
},
"required": [
"mean",
"std"
],
"title": "model.v0_5.FixedZeroMeanUnitVarianceKwargs",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (`SizeReference`)\n- data dependent size using `DataDependentSize` (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceKwargs`",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SoftmaxDescr": {
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
},
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for `SoftmaxDescr`",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"default": "output",
"description": "Output tensor id.\nNo duplicates are allowed across all inputs and outputs.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.\n\nnote: `postprocessing` always ends with an 'ensure_dtype' operation.\n If not given this is added to cast to this tensor's `data.type`.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"ensure_dtype": "#/$defs/EnsureDtypeDescr",
"fixed_zero_mean_unit_variance": "#/$defs/FixedZeroMeanUnitVarianceDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"softmax": "#/$defs/SoftmaxDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "id"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/EnsureDtypeDescr"
},
{
"$ref": "#/$defs/FixedZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/SoftmaxDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"axes"
],
"title": "model.v0_5.OutputTensorDescr",
"type": "object"
}
Fields:
- description (str)
- axes (NotEmpty[Sequence[IO_AxisT]])
- test_tensor (FAIR[Optional[FileDescr_]])
- sample_tensor (FAIR[Optional[FileDescr_]])
- data (Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])
- id (TensorId)
- postprocessing (List[PostprocessingDescr])
Validators:
- _validate_axes→axes
- _validate_sample_tensor
- _check_data_type_across_channels→data
- _check_data_matches_channelaxis
- _validate_postprocessing_kwargs
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
id
pydantic-field
¤
id: TensorId
Output tensor id. No duplicates are allowed across all inputs and outputs.
postprocessing
pydantic-field
¤
postprocessing: List[PostprocessingDescr]
Description of how this output should be postprocessed.
postprocessing always ends with an 'ensure_dtype' operation.
If not given, it is added to cast to this tensor's data.type.
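A hedged sketch of an explicit postprocessing chain that ends with the ensure_dtype step mentioned above (constructor patterns follow the docstring examples shown in the schema):
from bioimageio.spec.model.v0_5 import (
    EnsureDtypeDescr,
    EnsureDtypeKwargs,
    SigmoidDescr,
)
postprocessing = [
    SigmoidDescr(),                                              # squash values into (0, 1)
    EnsureDtypeDescr(kwargs=EnsureDtypeKwargs(dtype="float32")),  # explicit final cast
]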
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
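Putting the fields together, a hedged sketch of a small output tensor description (identifiers and sizes are made up; the axis type discriminators are assumed to default correctly when constructing in Python, and FAIR-wrapped fields such as test_tensor may merely emit warnings when left unset):
from bioimageio.spec.model.v0_5 import (
    AxisId,
    BatchAxis,
    ChannelAxis,
    Identifier,
    OutputTensorDescr,
    SigmoidDescr,
    SpaceOutputAxis,
    TensorId,
)
output = OutputTensorDescr(
    id=TensorId("probabilities"),
    axes=[
        BatchAxis(),
        ChannelAxis(channel_names=[Identifier("foreground")]),
        SpaceOutputAxis(id=AxisId("y"), size=256),
        SpaceOutputAxis(id=AxisId("x"), size=256),
    ],
    postprocessing=[SigmoidDescr()],
)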
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ParameterizedSize
pydantic-model
¤
Bases: Node
Describes a range of valid tensor axis sizes as size = min + n*step.
- min and step are given by the model description.
- All blocksize parameters n = 0, 1, 2, ... yield a valid size.
- A greater blocksize parameter n results in a greater size, which allows the axis size to be adjusted more generically.
Show JSON schema:
{
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
}
Fields:
get_n
¤
get_n(s: int) -> ParameterizedSize_N
return the smallest n parameterizing a size greater than or equal to s
Source code in src/bioimageio/spec/model/v0_5.py
get_size
¤
get_size(n: ParameterizedSize_N) -> int
Source code in src/bioimageio/spec/model/v0_5.py
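A short arithmetic sketch of the size = min + n*step rule using get_size and get_n (field values are made up):
from bioimageio.spec.model.v0_5 import ParameterizedSize
p = ParameterizedSize(min=64, step=16)
assert p.get_size(3) == 112   # 64 + 3*16
assert p.get_n(100) == 3      # smallest n with min + n*step >= 100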
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_size
¤
validate_size(size: int) -> int
Source code in src/bioimageio/spec/model/v0_5.py
ProcessingDescrBase
pydantic-model
¤
Bases: NodeWithExplicitlySetFields, ABC
processing base class
Show JSON schema:
{
"additionalProperties": false,
"description": "processing base class",
"properties": {},
"title": "model.v0_5.ProcessingDescrBase",
"type": "object"
}
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ProcessingKwargs
pydantic-model
¤
Bases: KwargsNode
base class for pre-/postprocessing key word arguments
Show JSON schema:
{
"additionalProperties": false,
"description": "base class for pre-/postprocessing key word arguments",
"properties": {},
"title": "model.v0_4.ProcessingKwargs",
"type": "object"
}
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
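A hedged sketch of the mapping-style access that ProcessingKwargs subclasses provide (ScaleLinearKwargs is used here only as a convenient concrete subclass, and __getitem__ is assumed to return the field value by name):
from bioimageio.spec.model.v0_5 import ScaleLinearKwargs
kwargs = ScaleLinearKwargs(gain=2.0, offset=3.0)
assert "gain" in kwargs            # __contains__
assert kwargs["offset"] == 3.0     # __getitem__
assert kwargs.get("missing", 0) == 0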
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
PytorchStateDictWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
}
Fields:
- source (FileSource)
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- architecture (Union[ArchitectureFromFileDescr, ArchitectureFromLibraryDescr])
- pytorch_version (Version)
- dependencies (Optional[FileDescr_dependencies])

Validators:
- _validate
architecture
pydantic-field
¤
architecture: Union[
ArchitectureFromFileDescr, ArchitectureFromLibraryDescr
]
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that have trained this model resulting in the original weights file.
(If this is the initial weights entry, i.e. it does not have a parent)
Or the person(s) who have converted the weights to this weights format.
(If this is a child weight, i.e. it has a parent field)
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
dependencies
pydantic-field
¤
dependencies: Optional[FileDescr_dependencies] = None
Custom dependencies beyond pytorch, described in a Conda environment file. Allows specifying custom dependencies; see the conda docs:
- Exporting an environment file across platforms
- Creating an environment file manually

The conda environment file should include pytorch, and any version pinning has to be compatible with pytorch_version.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
pytorch_version
pydantic-field
¤
pytorch_version: Version
Version of the PyTorch library used.
If architecture.dependencies is specified, it has to include pytorch and any version pinning has to be compatible.
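Putting these fields together, a weights entry might be declared like the sketch below. This is a minimal, hedged example: all file names, the class name and its kwargs are placeholders, the referenced files are assumed to exist next to the rdf.yaml, and string inputs such as the version are coerced by pydantic.

```python
from bioimageio.spec.model.v0_5 import (
    ArchitectureFromFileDescr,
    PytorchStateDictWeightsDescr,
)

# placeholder file names and class name; not validated against real files
weights = PytorchStateDictWeightsDescr(
    source="weights.pt",  # path or URL of the state-dict file
    architecture=ArchitectureFromFileDescr(
        source="my_network.py",     # file defining the architecture
        callable="MyNetworkClass",  # identifier of a callable returning a torch.nn.Module
        kwargs={"num_classes": 3},  # key word arguments passed to the callable
    ),
    pytorch_version="2.1",  # coerced into a Version instance
)
```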
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
RelativeFilePath
¤
Bases: RelativePathBase[Union[AbsoluteFilePath, HttpUrl, ZipPath]]
flowchart TD
bioimageio.spec.model.v0_5.RelativeFilePath[RelativeFilePath]
bioimageio.spec._internal.io.RelativePathBase[RelativePathBase]
bioimageio.spec._internal.io.RelativePathBase --> bioimageio.spec.model.v0_5.RelativeFilePath
click bioimageio.spec.model.v0_5.RelativeFilePath href "" "bioimageio.spec.model.v0_5.RelativeFilePath"
click bioimageio.spec._internal.io.RelativePathBase href "" "bioimageio.spec._internal.io.RelativePathBase"
A path relative to the rdf.yaml file (also if the RDF source is a URL).
| METHOD | DESCRIPTION |
|---|---|
| `__repr__` | |
| `__str__` | |
| `absolute` | get the absolute path/url |
| `format` | |
| `get_absolute` | |
| `model_post_init` | add validation @private |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `path` | |
__repr__
¤
__repr__() -> str
Source code in src/bioimageio/spec/_internal/io.py
__str__
¤
__str__() -> str
Source code in src/bioimageio/spec/_internal/io.py
absolute
¤
absolute() -> AbsolutePathT
get the absolute path/url
(resolved at time of initialization with the root of the ValidationContext)
Source code in src/bioimageio/spec/_internal/io.py
format
¤
format() -> str
Source code in src/bioimageio/spec/_internal/io.py
get_absolute
¤
get_absolute(
root: "RootHttpUrl | Path | AnyUrl | ZipFile",
) -> "AbsoluteFilePath | HttpUrl | ZipPath"
Source code in src/bioimageio/spec/_internal/io.py
model_post_init
¤
model_post_init(__context: Any) -> None
add validation @private
Source code in src/bioimageio/spec/_internal/io.py
ReproducibilityTolerance
pydantic-model
¤
Bases: Node
Describes what small numerical differences -- if any -- may be tolerated in the generated output when executing in different environments.
A tensor element output is considered mismatched to the test_tensor if abs(output - test_tensor) > absolute_tolerance + relative_tolerance * abs(test_tensor). (Internally we call numpy.testing.assert_allclose.)
Motivation:
For testing we can request the respective deep learning frameworks to be as reproducible as possible by setting seeds and choosing deterministic algorithms, but differences in operating systems, available hardware and installed drivers may still lead to numerical differences.
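To make the mismatch criterion above concrete, here is a small, self-contained numpy illustration (this is not the library's internal code; the tensors and tolerance values are made up):

```python
import numpy as np

# made-up test tensor and reproduced output
test_tensor = np.array([0.0, 1.0, 2.0])
output = np.array([0.0, 1.00005, 2.1])

absolute_tolerance = 1e-4
relative_tolerance = 1e-3

# element-wise mismatch criterion from the description above
mismatched = np.abs(output - test_tensor) > (
    absolute_tolerance + relative_tolerance * np.abs(test_tensor)
)
print(mismatched)                    # [False False  True]
print(int(mismatched.mean() * 1e6))  # 333333 mismatched elements per million
```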
Show JSON schema:
{
"additionalProperties": true,
"description": "Describes what small numerical differences -- if any -- may be tolerated\nin the generated output when executing in different environments.\n\nA tensor element *output* is considered mismatched to the **test_tensor** if\nabs(*output* - **test_tensor**) > **absolute_tolerance** + **relative_tolerance** * abs(**test_tensor**).\n(Internally we call [numpy.testing.assert_allclose](https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html).)\n\nMotivation:\n For testing we can request the respective deep learning frameworks to be as\n reproducible as possible by setting seeds and chosing deterministic algorithms,\n but differences in operating systems, available hardware and installed drivers\n may still lead to numerical differences.",
"properties": {
"relative_tolerance": {
"default": 0.001,
"description": "Maximum relative tolerance of reproduced test tensor.",
"maximum": 0.01,
"minimum": 0,
"title": "Relative Tolerance",
"type": "number"
},
"absolute_tolerance": {
"default": 0.0001,
"description": "Maximum absolute tolerance of reproduced test tensor.",
"minimum": 0,
"title": "Absolute Tolerance",
"type": "number"
},
"mismatched_elements_per_million": {
"default": 100,
"description": "Maximum number of mismatched elements/pixels per million to tolerate.",
"maximum": 1000,
"minimum": 0,
"title": "Mismatched Elements Per Million",
"type": "integer"
},
"output_ids": {
"default": [],
"description": "Limits the output tensor IDs these reproducibility details apply to.",
"items": {
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"title": "Output Ids",
"type": "array"
},
"weights_formats": {
"default": [],
"description": "Limits the weights formats these details apply to.",
"items": {
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
"title": "Weights Formats",
"type": "array"
}
},
"title": "model.v0_5.ReproducibilityTolerance",
"type": "object"
}
Fields:
- relative_tolerance (RelativeTolerance)
- absolute_tolerance (AbsoluteTolerance)
- mismatched_elements_per_million (MismatchedElementsPerMillion)
- output_ids (Sequence[TensorId])
- weights_formats (Sequence[WeightsFormat])
absolute_tolerance
pydantic-field
¤
absolute_tolerance: AbsoluteTolerance = 0.0001
Maximum absolute tolerance of reproduced test tensor.
mismatched_elements_per_million
pydantic-field
¤
mismatched_elements_per_million: MismatchedElementsPerMillion = 100
Maximum number of mismatched elements/pixels per million to tolerate.
output_ids
pydantic-field
¤
output_ids: Sequence[TensorId] = ()
Limits the output tensor IDs these reproducibility details apply to.
relative_tolerance
pydantic-field
¤
relative_tolerance: RelativeTolerance = 0.001
Maximum relative tolerance of reproduced test tensor.
weights_formats
pydantic-field
¤
weights_formats: Sequence[WeightsFormat] = ()
Limits the weights formats these details apply to.
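Putting the fields together, a tolerance entry could be declared roughly as below; this is a hedged sketch, the tensor id "prob" is a made-up example and the numeric values are arbitrary choices within the documented bounds.

```python
from bioimageio.spec.model.v0_5 import ReproducibilityTolerance, TensorId

# sketch: allow slightly larger differences for the onnx weights of a
# hypothetical output tensor "prob"
tol = ReproducibilityTolerance(
    relative_tolerance=0.005,             # must not exceed 0.01
    absolute_tolerance=0.001,
    mismatched_elements_per_million=200,  # must not exceed 1000
    output_ids=[TensorId("prob")],
    weights_formats=["onnx"],
)
```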
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ResourceId
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.ResourceId[ResourceId]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.ResourceId
click bioimageio.spec.model.v0_5.ResourceId href "" "bioimageio.spec.model.v0_5.ResourceId"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `__get_pydantic_json_schema__` | |
| `__new__` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `root_model` | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
RestrictCharacters
dataclass
¤
RestrictCharacters(alphabet: str)
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `validate` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `alphabet` | TYPE: `str` |
__get_pydantic_core_schema__
¤
__get_pydantic_core_schema__(
source: Type[Any], handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validator_annotations.py
validate
¤
validate(value: str) -> str
Source code in src/bioimageio/spec/_internal/validator_annotations.py
RunMode
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"name": {
"anyOf": [
{
"const": "deepimagej",
"type": "string"
},
{
"type": "string"
}
],
"description": "Run mode name",
"title": "Name"
},
"kwargs": {
"additionalProperties": true,
"description": "Run mode specific key word arguments",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"name"
],
"title": "model.v0_4.RunMode",
"type": "object"
}
Fields:
- name (Union[KnownRunMode, str])
- kwargs (Dict[str, Any])
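As a minimal sketch (the kwargs key is a made-up placeholder; "deepimagej" is the one run mode name the spec knows explicitly, but any other string is also accepted):

```python
from bioimageio.spec.model.v0_4 import RunMode

# "some_option" is a placeholder; kwargs are passed through unvalidated
run_mode = RunMode(name="deepimagej", kwargs={"some_option": 1})
```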
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearAlongAxisKwargs
pydantic-model
¤
Bases: ProcessingKwargs
Key word arguments for ScaleLinearDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
}
Fields:
- axis (NonBatchAxisId)
- gain (Union[float, NotEmpty[List[float]]])
- offset (Union[float, NotEmpty[List[float]]])

Validators:
- _validate
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Fixed linear scaling.
Examples:
1. Scale with scalar gain and offset
   - in YAML
     preprocessing:
       - id: scale_linear
         kwargs:
           gain: 2.0
           offset: 3.0
   - in Python:
     >>> preprocessing = [
     ...     ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain=2.0, offset=3.0))
     ... ]
2. Independent scaling along an axis
   - in YAML
     preprocessing:
       - id: scale_linear
         kwargs:
           axis: 'channel'
           gain: [1.0, 2.0, 3.0]
   - in Python:
     >>> preprocessing = [
     ...     ScaleLinearDescr(
     ...         kwargs=ScaleLinearAlongAxisKwargs(
     ...             axis=AxisId("channel"),
     ...             gain=[1.0, 2.0, 3.0],
     ...         )
     ...     )
     ... ]
Show JSON schema:
{
"$defs": {
"ScaleLinearAlongAxisKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"axis": {
"description": "The axis of gain and offset values.",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"required": [
"axis"
],
"title": "model.v0_5.ScaleLinearAlongAxisKwargs",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Fixed linear scaling.\n\nExamples:\n 1. Scale with scalar gain and offset\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n gain: 2.0\n offset: 3.0\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(kwargs=ScaleLinearKwargs(gain= 2.0, offset=3.0))\n ... ]\n\n 2. Independent scaling along an axis\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_linear\n kwargs:\n axis: 'channel'\n gain: [1.0, 2.0, 3.0]\n ```\n - in Python:\n >>> preprocessing = [\n ... ScaleLinearDescr(\n ... kwargs=ScaleLinearAlongAxisKwargs(\n ... axis=AxisId(\"channel\"),\n ... gain=[1.0, 2.0, 3.0],\n ... )\n ... )\n ... ]",
"properties": {
"id": {
"const": "scale_linear",
"title": "Id",
"type": "string"
},
"kwargs": {
"anyOf": [
{
"$ref": "#/$defs/ScaleLinearKwargs"
},
{
"$ref": "#/$defs/ScaleLinearAlongAxisKwargs"
}
],
"title": "Kwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleLinearDescr",
"type": "object"
}
Fields:
- id (Literal['scale_linear'])
- kwargs (Union[ScaleLinearKwargs, ScaleLinearAlongAxisKwargs])
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleLinearKwargs
pydantic-model
¤
Bases: ProcessingKwargs
Key word arguments for ScaleLinearDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "Key word arguments for `ScaleLinearDescr`",
"properties": {
"gain": {
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain",
"type": "number"
},
"offset": {
"default": 0.0,
"description": "additive term",
"title": "Offset",
"type": "number"
}
},
"title": "model.v0_5.ScaleLinearKwargs",
"type": "object"
}
Fields:
- gain (float)
- offset (float)

Validators:
- _validate
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleMeanVarianceDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Scale a tensor's data distribution to match another tensor's mean/std.
out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
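For illustration, a postprocessing entry using this step might look like the sketch below, in the style of the other examples in this reference; the tensor id "raw" and the axis ids are assumptions.

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    ScaleMeanVarianceDescr,
    ScaleMeanVarianceKwargs,
    TensorId,
)

# match the output's mean/std to the hypothetical input tensor "raw",
# computing the statistics jointly over the spatial axes only
postprocessing = [
    ScaleMeanVarianceDescr(
        kwargs=ScaleMeanVarianceKwargs(
            reference_tensor=TensorId("raw"),
            axes=(AxisId("y"), AxisId("x")),
        )
    )
]
```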
Show JSON schema:
{
"$defs": {
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceKwargs`",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale a tensor's data distribution to match another tensor's mean/std.\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"properties": {
"id": {
"const": "scale_mean_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"id",
"kwargs"
],
"title": "model.v0_5.ScaleMeanVarianceDescr",
"type": "object"
}
Fields:
- id (Literal['scale_mean_variance'])
- kwargs (ScaleMeanVarianceKwargs)
implemented_id
class-attribute
¤
implemented_id: Literal["scale_mean_variance"] = (
"scale_mean_variance"
)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleMeanVarianceKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for ScaleMeanVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceKwargs`",
"properties": {
"reference_tensor": {
"description": "Name of tensor to match.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.`",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"reference_tensor"
],
"title": "model.v0_5.ScaleMeanVarianceKwargs",
"type": "object"
}
Fields:
- reference_tensor (TensorId)
- axes (Optional[Sequence[AxisId]])
- eps (float)
axes
pydantic-field
¤
axes: Optional[Sequence[AxisId]] = None
The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize samples independently, leave out the 'batch' axis.
Default: Scale all axes jointly.
eps
pydantic-field
¤
eps: float = 1e-06
Epsilon for numeric stability:
out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleRangeDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Scale with percentiles.
Examples:
1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0
   - in YAML
     preprocessing:
       - id: scale_range
         kwargs:
           axes: ['y', 'x']
           max_percentile: 99.8
           min_percentile: 5.0
   - in Python
     >>> preprocessing = [
     ...     ScaleRangeDescr(
     ...         kwargs=ScaleRangeKwargs(
     ...             axes=(AxisId('y'), AxisId('x')),
     ...             max_percentile=99.8,
     ...             min_percentile=5.0,
     ...         )
     ...     )
     ... ]
2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.
   - in YAML
     preprocessing:
       - id: scale_range
         kwargs:
           axes: ['y', 'x']
           max_percentile: 99.8
           min_percentile: 5.0
       - id: clip
         kwargs:
           min: 0.0
           max: 1.0
   - in Python
     >>> preprocessing = [
     ...     ScaleRangeDescr(
     ...         kwargs=ScaleRangeKwargs(
     ...             axes=(AxisId('y'), AxisId('x')),
     ...             max_percentile=99.8,
     ...             min_percentile=5.0,
     ...         )
     ...     ),
     ...     ClipDescr(
     ...         kwargs=ClipKwargs(
     ...             min=0.0,
     ...             max=1.0,
     ...         )
     ...     ),
     ... ]
Show JSON schema:
{
"$defs": {
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale with percentiles.\n\nExamples:\n1. Scale linearly to map 5th percentile to 0 and 99.8th percentile to 1.0\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... )\n ... ]\n\n 2. Combine the above scaling with additional clipping to clip values outside the range given by the percentiles.\n - in YAML\n ```yaml\n preprocessing:\n - id: scale_range\n kwargs:\n axes: ['y', 'x']\n max_percentile: 99.8\n min_percentile: 5.0\n - id: scale_range\n - id: clip\n kwargs:\n min: 0.0\n max: 1.0\n ```\n - in Python\n >>> preprocessing = [\n ... ScaleRangeDescr(\n ... kwargs=ScaleRangeKwargs(\n ... axes= (AxisId('y'), AxisId('x')),\n ... max_percentile= 99.8,\n ... min_percentile= 5.0,\n ... )\n ... ),\n ... ClipDescr(\n ... kwargs=ClipKwargs(\n ... min=0.0,\n ... max=1.0,\n ... )\n ... ),\n ... ]",
"properties": {
"id": {
"const": "scale_range",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ScaleRangeDescr",
"type": "object"
}
Fields:
- id (Literal['scale_range'])
- kwargs (ScaleRangeKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ScaleRangeKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for ScaleRangeDescr
For min_percentile=0.0 (the default) and max_percentile=100 (the default)
this processing step normalizes data to the [0, 1] interval.
For other percentiles the normalized values will partially be outside the [0, 1]
interval. Use ScaleRange followed by ClipDescr if you want to limit the
normalized values to a range.
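A rough numpy sketch of the normalization described above (the spec only describes the operation; the actual computation is performed by consumer software such as bioimageio.core):

```python
import numpy as np

rng = np.random.default_rng(0)
tensor = rng.random((1, 1, 64, 64)) * 1000  # made-up intensity data

# values at the 5th and 99.8th percentile are mapped to 0 and ~1
v_lower, v_upper = np.percentile(tensor, [5.0, 99.8])
eps = 1e-6
out = (tensor - v_lower) / (v_upper - v_lower + eps)

# values below/above the chosen percentiles fall outside [0, 1], which is why
# the description above suggests following scale_range with a clip step
print(out.min() < 0, out.max() > 1)  # True True
```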
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize samples independently, leave out the \"batch\" axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"min_percentile": {
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"exclusiveMaximum": 100,
"minimum": 0,
"title": "Min Percentile",
"type": "number"
},
"max_percentile": {
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"exclusiveMinimum": 1,
"maximum": 100,
"title": "Max Percentile",
"type": "number"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor ID to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.",
"title": "Reference Tensor"
}
},
"title": "model.v0_5.ScaleRangeKwargs",
"type": "object"
}
Fields:
- axes (Optional[Sequence[AxisId]])
- min_percentile (float)
- max_percentile (float)
- eps (float)
- reference_tensor (Optional[TensorId])

Validators:
- min_smaller_max
axes
pydantic-field
¤
axes: Optional[Sequence[AxisId]] = None
The subset of axes to normalize jointly, i.e. axes to reduce to compute the min/max percentile value.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize samples independently, leave out the "batch" axis.
Default: Scale all axes jointly.
eps
pydantic-field
¤
eps: float = 1e-06
Epsilon for numeric stability.
out = (tensor - v_lower) / (v_upper - v_lower + eps);
with v_lower,v_upper values at the respective percentiles.
max_percentile
pydantic-field
¤
max_percentile: float = 100.0
The upper percentile used to determine the value to align with one.
Has to be bigger than min_percentile.
The range is 1 to 100 instead of 0 to 100 to avoid mistakenly
accepting percentiles specified in the range 0.0 to 1.0.
min_percentile
pydantic-field
¤
min_percentile: float = 0.0
The lower percentile used to determine the value to align with zero.
reference_tensor
pydantic-field
¤
reference_tensor: Optional[TensorId] = None
Tensor ID to compute the percentiles from. Default: The tensor itself.
For any tensor in inputs only input tensor references are allowed.
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
min_smaller_max
pydantic-validator
¤
min_smaller_max(
value: float, info: ValidationInfo
) -> float
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
Sha256
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.Sha256[Sha256]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.Sha256
click bioimageio.spec.model.v0_5.Sha256 href "" "bioimageio.spec.model.v0_5.Sha256"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
A SHA-256 hash value
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `__get_pydantic_json_schema__` | |
| `__new__` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `root_model` | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
SiUnit
¤
Bases: ValidatedString
flowchart TD
bioimageio.spec.model.v0_5.SiUnit[SiUnit]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec.model.v0_5.SiUnit
click bioimageio.spec.model.v0_5.SiUnit href "" "bioimageio.spec.model.v0_5.SiUnit"
click bioimageio.spec._internal.validated_string.ValidatedString href "" "bioimageio.spec._internal.validated_string.ValidatedString"
An SI unit
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` | |
| `__get_pydantic_json_schema__` | |
| `__new__` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `root_model` | |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
SigmoidDescr
pydantic-model
¤
Bases: ProcessingDescrBase
The logistic sigmoid function, a.k.a. expit function.
Examples:
- in YAML
  postprocessing:
    - id: sigmoid
- in Python:
  >>> postprocessing = [SigmoidDescr()]
Show JSON schema:
{
"additionalProperties": false,
"description": "The logistic sigmoid function, a.k.a. expit function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: sigmoid\n ```\n- in Python:\n >>> postprocessing = [SigmoidDescr()]",
"properties": {
"id": {
"const": "sigmoid",
"title": "Id",
"type": "string"
}
},
"required": [
"id"
],
"title": "model.v0_5.SigmoidDescr",
"type": "object"
}
Fields:
- id (Literal['sigmoid'])
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SizeReference
pydantic-model
¤
Bases: Node
A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.
axis.size = reference.size * reference.scale / axis.scale + offset
Note:
1. The axis and the referenced axis need to have the same unit (or no unit).
2. Batch axes may not be referenced.
3. Fractions are rounded down.
4. If the reference axis is concatenable the referencing axis is assumed to be
concatenable as well with the same block order.
Example:
An anisotropic input image of w*h=100*49 pixels depicts a physical space of 200*196mm².
Let's assume that we want to express the image height h in relation to its width w
instead of only accepting input images of exactly 100*49 pixels
(for example to express a range of valid image shapes by parametrizing w, see ParameterizedSize).
>>> w = SpaceInputAxis(id=AxisId("w"), size=100, unit="millimeter", scale=2)
>>> h = SpaceInputAxis(
...     id=AxisId("h"),
...     size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("w"), offset=-1),
...     unit="millimeter",
...     scale=4,
... )
>>> print(h.size.get_size(h, w))
49
⇒ h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49
Show JSON schema:
{
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
Fields:
- tensor_id (TensorId)
- axis_id (AxisId)
- offset (int)
get_size
¤
get_size(
axis: Union[
ChannelAxis,
IndexInputAxis,
IndexOutputAxis,
TimeInputAxis,
SpaceInputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
],
ref_axis: Union[
ChannelAxis,
IndexInputAxis,
IndexOutputAxis,
TimeInputAxis,
SpaceInputAxis,
TimeOutputAxis,
TimeOutputAxisWithHalo,
SpaceOutputAxis,
SpaceOutputAxisWithHalo,
],
n: ParameterizedSize_N = 0,
ref_size: Optional[int] = None,
)
Compute the concrete size for a given axis and its reference axis.
| PARAMETER | DESCRIPTION |
|---|---|
| `axis` | The axis to compute the concrete size for. |
| `ref_axis` | The reference axis to compute the size from. |
| `n` | If the `ref_axis` is parameterized (of type `ParameterizedSize`), `n` selects which of its valid sizes to use. |
| `ref_size` | Overwrite the reference size instead of deriving it from `ref_axis` (`ref_axis.scale` is still used; any given `n` is ignored). |
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SoftmaxDescr
pydantic-model
¤
Bases: ProcessingDescrBase
The softmax function.
Examples:
- in YAML
  postprocessing:
    - id: softmax
      kwargs:
        axis: channel
- in Python:
  >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId("channel")))]
Show JSON schema:
{
"$defs": {
"SoftmaxKwargs": {
"additionalProperties": false,
"description": "key word arguments for `SoftmaxDescr`",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "The softmax function.\n\nExamples:\n- in YAML\n ```yaml\n postprocessing:\n - id: softmax\n kwargs:\n axis: channel\n ```\n- in Python:\n >>> postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId(\"channel\")))]",
"properties": {
"id": {
"const": "softmax",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/SoftmaxKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.SoftmaxDescr",
"type": "object"
}
Fields:
- id (Literal['softmax'])
- kwargs (SoftmaxKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SoftmaxKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for SoftmaxDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `SoftmaxDescr`",
"properties": {
"axis": {
"default": "channel",
"description": "The axis to apply the softmax function along.\nNote:\n Defaults to 'channel' axis\n (which may not exist, in which case\n a different axis id has to be specified).",
"examples": [
"channel"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
}
},
"title": "model.v0_5.SoftmaxKwargs",
"type": "object"
}
Fields:
- axis (NonBatchAxisId)
axis
pydantic-field
¤
axis: NonBatchAxisId
The axis to apply the softmax function along. Note: Defaults to 'channel' axis (which may not exist, in which case a different axis id has to be specified).
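For example, for a tensor without a 'channel' axis the axis id has to be given explicitly; a minimal sketch in the style of the softmax example above (the 'index' axis id is an assumption):

```python
from bioimageio.spec.model.v0_5 import AxisId, SoftmaxDescr, SoftmaxKwargs

postprocessing = [SoftmaxDescr(kwargs=SoftmaxKwargs(axis=AxisId("index")))]
```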
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"type"
],
"title": "model.v0_5.SpaceAxisBase",
"type": "object"
}
Fields:
- description (str)
- type (Literal['space'])
- id (NonBatchAxisId)
- unit (Optional[SpaceUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceInputAxis
pydantic-model
¤
Bases: SpaceAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
}
Fields:
- size (Union[int, ParameterizedSize, SizeReference])
- id (NonBatchAxisId)
- description (str)
- type (Literal['space'])
- unit (Optional[SpaceUnit])
- scale (float)
- concatenable (bool)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, ParameterizedSize, SizeReference]
The size/length of this axis can be specified as
- fixed integer
- parameterized series of valid sizes (ParameterizedSize)
- reference to another axis with an optional offset (SizeReference)
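The sketch below illustrates the three size variants listed above; the tensor id "input" and the concrete numbers are placeholders chosen for this example (the pattern mirrors the SpaceInputAxis usage shown in the SizeReference docstring).

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    ParameterizedSize,
    SizeReference,
    SpaceInputAxis,
    TensorId,
)

fixed = SpaceInputAxis(id=AxisId("x"), size=512)                                    # fixed integer
flexible = SpaceInputAxis(id=AxisId("x"), size=ParameterizedSize(min=32, step=16))  # valid sizes: 32, 48, 64, ...
linked = SpaceInputAxis(
    id=AxisId("y"),
    size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("x")),           # same extent as axis "x" of tensor "input"
)
```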
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceOutputAxis
pydantic-model
¤
Bases: SpaceAxisBase, _WithOutputAxisSize
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
}
Fields:
- size (Union[int, SizeReference])
- id (NonBatchAxisId)
- description (str)
- type (Literal['space'])
- unit (Optional[SpaceUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, SizeReference]
The size/length of this axis can be specified as
- fixed integer
- reference to another axis with an optional offset (see SizeReference)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
SpaceOutputAxisWithHalo
pydantic-model
¤
Bases: SpaceAxisBase, WithHalo
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
}
Fields:
- halo (int)
- size (SizeReference)
- id (NonBatchAxisId)
- description (str)
- type (Literal['space'])
- unit (Optional[SpaceUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
halo
pydantic-field
¤
halo: int
The halo should be cropped from the output tensor to avoid boundary effects.
It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model use size.offset instead.
size
pydantic-field
¤
size: SizeReference
reference to another axis with an optional offset (see SizeReference)
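A minimal sketch combining halo and size for this axis type; the tensor and axis ids are placeholders for illustration:

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    SizeReference,
    SpaceOutputAxisWithHalo,
    TensorId,
)

out_x = SpaceOutputAxisWithHalo(
    id=AxisId("x"),
    size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("x")),
    halo=32,  # 32 pixels are cropped from each side: size_after_crop = size - 2 * 32
)
```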
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TensorDescrBase
pydantic-model
¤
Bases: Node, Generic[IO_AxisT]
Show JSON schema:
{
"$defs": {
"BatchAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "batch",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "batch",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"const": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The batch size may be fixed to 1,\notherwise (the default) it may be chosen arbitrarily depending on available memory",
"title": "Size"
}
},
"required": [
"type"
],
"title": "model.v0_5.BatchAxis",
"type": "object"
},
"ChannelAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "channel",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "channel",
"title": "Type",
"type": "string"
},
"channel_names": {
"items": {
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"minItems": 1,
"title": "Channel Names",
"type": "array"
}
},
"required": [
"type",
"channel_names"
],
"title": "model.v0_5.ChannelAxis",
"type": "object"
},
"DataDependentSize": {
"additionalProperties": false,
"properties": {
"min": {
"default": 1,
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"max": {
"anyOf": [
{
"exclusiveMinimum": 1,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Max"
}
},
"title": "model.v0_5.DataDependentSize",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"IndexInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.IndexInputAxis",
"type": "object"
},
"IndexOutputAxis": {
"additionalProperties": false,
"properties": {
"id": {
"default": "index",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "index",
"title": "Type",
"type": "string"
},
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
},
{
"$ref": "#/$defs/DataDependentSize"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (`SizeReference`)\n- data dependent size using `DataDependentSize` (size is only known after model inference)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"type",
"size"
],
"title": "model.v0_5.IndexOutputAxis",
"type": "object"
},
"IntervalOrRatioDataDescr": {
"additionalProperties": false,
"properties": {
"type": {
"default": "float32",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64"
],
"examples": [
"float32",
"float64",
"uint8",
"uint16"
],
"title": "Type",
"type": "string"
},
"range": {
"default": [
null,
null
],
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\n`None` corresponds to min/max of what can be expressed by **type**.",
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
{
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
}
],
"title": "Range",
"type": "array"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
}
],
"default": "arbitrary unit",
"title": "Unit"
},
"scale": {
"default": 1.0,
"description": "Scale for data on an interval (or ratio) scale.",
"title": "Scale",
"type": "number"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset for data on a ratio scale.",
"title": "Offset"
}
},
"title": "model.v0_5.IntervalOrRatioDataDescr",
"type": "object"
},
"NominalOrOrdinalDataDescr": {
"additionalProperties": false,
"properties": {
"values": {
"anyOf": [
{
"items": {
"type": "integer"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "boolean"
},
"minItems": 1,
"type": "array"
},
{
"items": {
"type": "string"
},
"minItems": 1,
"type": "array"
}
],
"description": "A fixed set of nominal or an ascending sequence of ordinal values.\nIn this case `data.type` is required to be an unsigend integer type, e.g. 'uint8'.\nString `values` are interpreted as labels for tensor values 0, ..., N.\nNote: as YAML 1.2 does not natively support a \"set\" datatype,\nnominal values should be given as a sequence (aka list/array) as well.",
"title": "Values"
},
"type": {
"default": "uint8",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"examples": [
"float32",
"uint8",
"uint16",
"int64",
"bool"
],
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"const": "arbitrary unit",
"type": "string"
},
{
"description": "An SI unit",
"minLength": 1,
"pattern": "^(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?((\u00b7(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^[+-]?[1-9]\\d*)?)|(/(Q|R|Y|Z|E|P|T|G|M|k|h|da|d|c|m|\u00b5|n|p|f|a|z|y|r|q)?(m|g|s|A|K|mol|cd|Hz|N|Pa|J|W|C|V|F|\u03a9|S|Wb|T|H|lm|lx|Bq|Gy|Sv|kat|l|L)(\\^+?[1-9]\\d*)?))*$",
"title": "SiUnit",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
}
},
"required": [
"values"
],
"title": "model.v0_5.NominalOrOrdinalDataDescr",
"type": "object"
},
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
},
"SpaceInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceInputAxis",
"type": "object"
},
"SpaceOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxis",
"type": "object"
},
"SpaceOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "x",
"examples": [
"x",
"y",
"z"
],
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "space",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attometer",
"angstrom",
"centimeter",
"decimeter",
"exameter",
"femtometer",
"foot",
"gigameter",
"hectometer",
"inch",
"kilometer",
"megameter",
"meter",
"micrometer",
"mile",
"millimeter",
"nanometer",
"parsec",
"petameter",
"picometer",
"terameter",
"yard",
"yoctometer",
"yottameter",
"zeptometer",
"zettameter"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.SpaceOutputAxisWithHalo",
"type": "object"
},
"TimeInputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
},
"TimeOutputAxis": {
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
},
"TimeOutputAxisWithHalo": {
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"id": {
"description": "Tensor id. No duplicates are allowed.",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"description": {
"default": "",
"description": "free text description",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"axes": {
"description": "tensor axes",
"items": {
"anyOf": [
{
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexInputAxis",
"space": "#/$defs/SpaceInputAxis",
"time": "#/$defs/TimeInputAxis"
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexInputAxis"
},
{
"$ref": "#/$defs/TimeInputAxis"
},
{
"$ref": "#/$defs/SpaceInputAxis"
}
]
},
{
"discriminator": {
"mapping": {
"batch": "#/$defs/BatchAxis",
"channel": "#/$defs/ChannelAxis",
"index": "#/$defs/IndexOutputAxis",
"space": {
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
},
"time": {
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
}
},
"propertyName": "type"
},
"oneOf": [
{
"$ref": "#/$defs/BatchAxis"
},
{
"$ref": "#/$defs/ChannelAxis"
},
{
"$ref": "#/$defs/IndexOutputAxis"
},
{
"oneOf": [
{
"$ref": "#/$defs/TimeOutputAxis"
},
{
"$ref": "#/$defs/TimeOutputAxisWithHalo"
}
]
},
{
"oneOf": [
{
"$ref": "#/$defs/SpaceOutputAxis"
},
{
"$ref": "#/$defs/SpaceOutputAxisWithHalo"
}
]
}
]
}
]
},
"minItems": 1,
"title": "Axes",
"type": "array"
},
"test_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "An example tensor to use for testing.\nUsing the model with the test input tensors is expected to yield the test output tensors.\nEach test tensor has be a an ndarray in the\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe file extension must be '.npy'."
},
"sample_tensor": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "A sample tensor to illustrate a possible input/output for the model,\nThe sample image primarily serves to inform a human user about an example use case\nand is typically stored as .hdf5, .png or .tiff.\nIt has to be readable by the [imageio library](https://imageio.readthedocs.io/en/stable/formats/index.html#supported-formats)\n(numpy's `.npy` format is not supported).\nThe image dimensionality has to match the number of axes specified in this tensor description."
},
"data": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
},
{
"items": {
"anyOf": [
{
"$ref": "#/$defs/NominalOrOrdinalDataDescr"
},
{
"$ref": "#/$defs/IntervalOrRatioDataDescr"
}
]
},
"minItems": 1,
"type": "array"
}
],
"default": {
"type": "float32",
"range": [
null,
null
],
"unit": "arbitrary unit",
"scale": 1.0,
"offset": null
},
"description": "Description of the tensor's data values, optionally per channel.\nIf specified per channel, the data `type` needs to match across channels.",
"title": "Data"
}
},
"required": [
"id",
"axes"
],
"title": "model.v0_5.TensorDescrBase",
"type": "object"
}
Fields:
- id (TensorId)
- description (str)
- axes (NotEmpty[Sequence[IO_AxisT]])
- test_tensor (FAIR[Optional[FileDescr_]])
- sample_tensor (FAIR[Optional[FileDescr_]])
- data (Union[TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]])
Validators:
- _validate_axes → axes
- _validate_sample_tensor
- _check_data_type_across_channels → data
- _check_data_matches_channelaxis
data
pydantic-field
¤
data: Union[
TensorDataDescr, NotEmpty[Sequence[TensorDataDescr]]
]
Description of the tensor's data values, optionally per channel.
If specified per channel, the data type needs to match across channels.
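As a minimal sketch, a single (not per-channel) data description declaring uint8 values in [0, 255]; unit and scale keep their defaults:

```python
from bioimageio.spec.model.v0_5 import IntervalOrRatioDataDescr

data = IntervalOrRatioDataDescr(type="uint8", range=(0, 255))
```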
dtype
property
¤
dtype: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
dtype as specified under data.type or data[i].type
sample_tensor
pydantic-field
¤
sample_tensor: FAIR[Optional[FileDescr_]] = None
A sample tensor to illustrate a possible input/output for the model.
The sample image primarily serves to inform a human user about an example use case
and is typically stored as .hdf5, .png or .tiff.
It has to be readable by the imageio library
(numpy's .npy format is not supported).
The image dimensionality has to match the number of axes specified in this tensor description.
test_tensor
pydantic-field
¤
test_tensor: FAIR[Optional[FileDescr_]] = None
An example tensor to use for testing. Using the model with the test input tensors is expected to yield the test output tensors. Each test tensor has to be an ndarray in the numpy.lib file format. The file extension must be '.npy'.
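A minimal sketch of such a test tensor entry; the URL is a placeholder, so I/O checks are disabled via the ValidationContext accepted by model_validate (documented below):

```python
from bioimageio.spec import ValidationContext
from bioimageio.spec.model.v0_5 import FileDescr

test_tensor = FileDescr.model_validate(
    {"source": "https://example.com/test_input.npy"},  # placeholder URL; add "sha256" to pin the file content
    context=ValidationContext(perform_io_checks=False),
)
```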
get_axis_sizes_for_array
¤
get_axis_sizes_for_array(
array: NDArray[Any],
) -> Dict[AxisId, int]
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
TensorId
¤
Bases: LowerCaseIdentifier
Inheritance (class diagram omitted): ValidatedString → LowerCaseIdentifier → TensorId
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| root_model | TYPE: Type[RootModel[Any]] |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
LowerCaseIdentifierAnno
]
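In practice a TensorId is constructed directly from a string, as in the SizeReference docstring example above; a minimal sketch:

```python
from bioimageio.spec.model.v0_5 import TensorId

tid = TensorId("raw_image")  # behaves like a str, validated as a lower-case identifier
```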
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
TensorflowJsWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
}
Fields:
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- tensorflow_version (Version)
- source (FileSource)
Validators:
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors:
either the person(s) who trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
source
pydantic-field
¤
source: FileSource
The multi-file weights. All required files/folders should be bundled in a zip archive.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
Version of the TensorFlow library used.
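A minimal sketch validating a tensorflow_js weights entry from a plain dict; the URL and version are placeholders, and I/O checks are skipped for the example:

```python
from bioimageio.spec import ValidationContext
from bioimageio.spec.model.v0_5 import TensorflowJsWeightsDescr

weights = TensorflowJsWeightsDescr.model_validate(
    {
        "source": "https://example.com/tfjs_weights.zip",  # placeholder URL to a zip archive
        "tensorflow_version": "2.15.0",                    # coerced into a Version instance
    },
    context=ValidationContext(perform_io_checks=False),
)
```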
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
TensorflowSavedModelBundleWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
}
Fields:
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- tensorflow_version (Version)
- dependencies (Optional[FileDescr_dependencies])
- source (FileSource)
Validators:
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
dependencies
pydantic-field
¤
dependencies: Optional[FileDescr_dependencies] = None
Custom dependencies beyond tensorflow. These should include tensorflow, and any version pinning has to be compatible with tensorflow_version.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
source
pydantic-field
¤
source: FileSource
The multi-file weights. All required files/folders should be bundled into a zip archive.
tensorflow_version
pydantic-field
¤
tensorflow_version: Version
Version of the TensorFlow library used.
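A minimal construction sketch (not part of the original reference): the archive name and TensorFlow version are hypothetical, and whether file existence and hashes are checked at construction time depends on the active validation context.

```python
from bioimageio.spec.model.v0_5 import TensorflowSavedModelBundleWeightsDescr

# Hypothetical zip archive with the SavedModel bundle, located next to rdf.yaml.
tf_weights = TensorflowSavedModelBundleWeightsDescr(
    source="tf_saved_model_bundle.zip",  # all required files/folders in one zip archive
    tensorflow_version="2.14.0",         # TensorFlow version used to create the bundle
)
```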
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py, lines 306-312
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py, lines 298-304
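An illustrative usage sketch, assuming the hypothetical tf_weights entry from above and that the returned reader behaves like a binary file object:

```python
# Open the weights source; for URL sources the file is downloaded first.
reader = tf_weights.get_reader(progressbar=False)
magic = reader.read(4)  # e.g. inspect the archive header (b"PK\x03\x04" for zip)
```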
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py, lines 270-296
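A hedged usage sketch, continuing with the hypothetical tf_weights entry from above; this call reads the local source file, so it fails if the file is missing.

```python
# Recompute and validate the SHA-256 hash of the source file.
tf_weights.validate_sha256(force_recompute=True)
```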
TimeAxisBase
pydantic-model
¤
Bases: AxisBase
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"type"
],
"title": "model.v0_5.TimeAxisBase",
"type": "object"
}
Fields:
- description (str)
- type (Literal['time'])
- id (NonBatchAxisId)
- unit (Optional[TimeUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59-75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
TimeInputAxis
pydantic-model
¤
Bases: TimeAxisBase, _WithInputAxisSize
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
},
"concatenable": {
"default": false,
"description": "If a model has a `concatenable` input axis, it can be processed blockwise,\nsplitting a longer sample axis into blocks matching its input tensor description.\nOutput axes are concatenable if they have a `SizeReference` to a concatenable\ninput axis.",
"title": "Concatenable",
"type": "boolean"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeInputAxis",
"type": "object"
}
Fields:
- size (Union[int, ParameterizedSize, SizeReference])
- id (NonBatchAxisId)
- description (str)
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (float)
- concatenable (bool)
concatenable
pydantic-field
¤
concatenable: bool = False
If a model has a concatenable input axis, it can be processed blockwise,
splitting a longer sample axis into blocks matching its input tensor description.
Output axes are concatenable if they have a SizeReference to a concatenable
input axis.
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, ParameterizedSize, SizeReference]
The size/length of this axis can be specified as
- fixed integer
- parameterized series of valid sizes (ParameterizedSize)
- reference to another axis with an optional offset (SizeReference)
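For illustration, the three size variants could be written as follows; the tensor and axis ids are hypothetical.

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    ParameterizedSize,
    SizeReference,
    TensorId,
    TimeInputAxis,
)

fixed = TimeInputAxis(size=100)                                # exactly 100 frames
ranged = TimeInputAxis(size=ParameterizedSize(min=8, step=4))  # 8, 12, 16, ... frames
linked = TimeInputAxis(                                        # same length as axis "t" of tensor "raw"
    size=SizeReference(tensor_id=TensorId("raw"), axis_id=AxisId("t"))
)
```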
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59-75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
TimeOutputAxis
pydantic-model
¤
Bases: TimeAxisBase, _WithOutputAxisSize
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxis",
"type": "object"
}
Fields:
- size (Union[int, SizeReference])
- id (NonBatchAxisId)
- description (str)
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
size
pydantic-field
¤
size: Union[int, SizeReference]
The size/length of this axis can be specified as
- fixed integer
- reference to another axis with an optional offset (see SizeReference)
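For example (illustrative tensor and axis ids, not taken from the reference):

```python
from bioimageio.spec.model.v0_5 import AxisId, SizeReference, TensorId, TimeOutputAxis

# Output time axis with the same length as axis "t" of the (hypothetical) input tensor "raw".
out_t = TimeOutputAxis(
    size=SizeReference(tensor_id=TensorId("raw"), axis_id=AxisId("t"))
)
```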
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59-75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
TimeOutputAxisWithHalo
pydantic-model
¤
Bases: TimeAxisBase, WithHalo
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
},
"id": {
"default": "time",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"description": {
"default": "",
"description": "A short description of this axis beyond its type and id.",
"maxLength": 128,
"title": "Description",
"type": "string"
},
"type": {
"const": "time",
"title": "Type",
"type": "string"
},
"unit": {
"anyOf": [
{
"enum": [
"attosecond",
"centisecond",
"day",
"decisecond",
"exasecond",
"femtosecond",
"gigasecond",
"hectosecond",
"hour",
"kilosecond",
"megasecond",
"microsecond",
"millisecond",
"minute",
"nanosecond",
"petasecond",
"picosecond",
"second",
"terasecond",
"yoctosecond",
"yottasecond",
"zeptosecond",
"zettasecond"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Unit"
},
"scale": {
"default": 1.0,
"exclusiveMinimum": 0,
"title": "Scale",
"type": "number"
}
},
"required": [
"halo",
"size",
"type"
],
"title": "model.v0_5.TimeOutputAxisWithHalo",
"type": "object"
}
Fields:
- halo (int)
- size (SizeReference)
- id (NonBatchAxisId)
- description (str)
- type (Literal['time'])
- unit (Optional[TimeUnit])
- scale (float)
description
pydantic-field
¤
description: str = ''
A short description of this axis beyond its type and id.
halo
pydantic-field
¤
halo: int
The halo should be cropped from the output tensor to avoid boundary effects.
It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model use size.offset instead.
size
pydantic-field
¤
size: SizeReference
reference to another axis with an optional offset (see SizeReference)
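A sketch of an output time axis with a halo; the tensor and axis ids are hypothetical.

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    SizeReference,
    TensorId,
    TimeOutputAxisWithHalo,
)

# The output has as many frames as axis "t" of tensor "raw", but 2 frames on
# each side suffer from boundary effects and should be cropped:
# size_after_crop = size - 2 * halo.
out_t = TimeOutputAxisWithHalo(
    size=SizeReference(tensor_id=TensorId("raw"), axis_id=AxisId("t")),
    halo=2,
)
```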
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59-75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
TorchscriptWeightsDescr
pydantic-model
¤
Bases: WeightsEntryDescrBase
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
}
Fields:
- source (FileSource)
- sha256 (Optional[Sha256])
- authors (Optional[List[Author]])
- parent (Optional[WeightsFormat])
- comment (str)
- pytorch_version (Version)
Validators:
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that trained this model, resulting in the original weights file
(if this is the initial weights entry, i.e. it does not have a parent),
or the person(s) who converted the weights to this weights format
(if this is a child weights entry, i.e. it has a parent field).
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
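An illustrative sketch of a converted (child) torchscript entry; the file name, versions, and comment are hypothetical, and whether the file must exist at construction time depends on the active validation context.

```python
from bioimageio.spec.model.v0_5 import TorchscriptWeightsDescr

ts_weights = TorchscriptWeightsDescr(
    source="weights_torchscript.pt",  # hypothetical local file
    pytorch_version="2.1.0",
    parent="pytorch_state_dict",      # converted from the original state-dict entry
    comment="exported with torch.jit.trace",
)
```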
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py, lines 306-312
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py, lines 298-304
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py, lines 270-296
Uploader
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
}
Fields:
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |

| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py, lines 43-81
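A minimal example with hypothetical values (only email is required; name is optional):

```python
from bioimageio.spec.model.v0_5 import Uploader

uploader = Uploader(email="jane.doe@example.org", name="Jane Doe")
```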
Version
¤
Bases: RootModel[Union[str, int, float]]
wraps a packaging.version.Version instance for validation in pydantic models
| METHOD | DESCRIPTION |
|---|---|
| __eq__ | |
| __lt__ | |
| __str__ | |
| model_post_init | set _version attribute |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| base_version | The "base version" of the version. TYPE: str |
| dev | The development number of the version. TYPE: Optional[int] |
| epoch | The epoch of the version. TYPE: int |
| is_devrelease | Whether this version is a development release. TYPE: bool |
| is_postrelease | Whether this version is a post-release. TYPE: bool |
| is_prerelease | Whether this version is a pre-release. TYPE: bool |
| local | The local version segment of the version. TYPE: Optional[str] |
| major | The first item of release or 0 if unavailable. TYPE: int |
| micro | The third item of release or 0 if unavailable. TYPE: int |
| minor | The second item of release or 0 if unavailable. TYPE: int |
| post | The post-release number of the version. TYPE: Optional[int] |
| pre | The pre-release segment of the version. TYPE: Optional[Tuple[str, int]] |
| public | The public portion of the version. TYPE: str |
| release | The components of the "release" segment of the version. TYPE: Tuple[int, ...] |
base_version
property
¤
base_version: str
The "base version" of the version.
Version("1.2.3").base_version '1.2.3' Version("1.2.3+abc").base_version '1.2.3' Version("1!1.2.3+abc.dev1").base_version '1!1.2.3'
The "base version" is the public version of the project without any pre or post release markers.
dev
property
¤
dev: Optional[int]
The development number of the version.
print(Version("1.2.3").dev) None Version("1.2.3.dev1").dev 1
epoch
property
¤
epoch: int
The epoch of the version.
Version("2.0.0").epoch 0 Version("1!2.0.0").epoch 1
is_devrelease
property
¤
is_devrelease: bool
Whether this version is a development release.
Version("1.2.3").is_devrelease False Version("1.2.3.dev1").is_devrelease True
is_postrelease
property
¤
is_postrelease: bool
Whether this version is a post-release.
Version("1.2.3").is_postrelease False Version("1.2.3.post1").is_postrelease True
is_prerelease
property
¤
is_prerelease: bool
Whether this version is a pre-release.
Version("1.2.3").is_prerelease False Version("1.2.3a1").is_prerelease True Version("1.2.3b1").is_prerelease True Version("1.2.3rc1").is_prerelease True Version("1.2.3dev1").is_prerelease True
local
property
¤
local: Optional[str]
The local version segment of the version.
print(Version("1.2.3").local) None Version("1.2.3+abc").local 'abc'
major
property
¤
major: int
The first item of release or 0 if unavailable.
>>> Version("1.2.3").major
1
micro
property
¤
micro: int
The third item of release or 0 if unavailable.
>>> Version("1.2.3").micro
3
>>> Version("1").micro
0
minor
property
¤
minor: int
The second item of release or 0 if unavailable.
>>> Version("1.2.3").minor
2
>>> Version("1").minor
0
post
property
¤
post: Optional[int]
The post-release number of the version.
print(Version("1.2.3").post) None Version("1.2.3.post1").post 1
pre
property
¤
pre: Optional[Tuple[str, int]]
The pre-release segment of the version.
print(Version("1.2.3").pre) None Version("1.2.3a1").pre ('a', 1) Version("1.2.3b1").pre ('b', 1) Version("1.2.3rc1").pre ('rc', 1)
public
property
¤
public: str
The public portion of the version.
Version("1.2.3").public '1.2.3' Version("1.2.3+abc").public '1.2.3' Version("1.2.3+abc.dev1").public '1.2.3'
release
property
¤
release: Tuple[int, ...]
The components of the "release" segment of the version.
Version("1.2.3").release (1, 2, 3) Version("2.0.0").release (2, 0, 0) Version("1!2.0.0.post0").release (2, 0, 0)
Includes trailing zeroes but not the epoch or any pre-release / development / post-release suffixes.
__eq__
¤
__eq__(other: Version)
Source code in src/bioimageio/spec/_internal/version_type.py, lines 25-26
__lt__
¤
__lt__(other: Version)
Source code in src/bioimageio/spec/_internal/version_type.py, lines 22-23
__str__
¤
__str__()
Source code in src/bioimageio/spec/_internal/version_type.py, lines 14-15
model_post_init
¤
model_post_init(__context: Any) -> None
set the _version attribute (private)
Source code in src/bioimageio/spec/_internal/version_type.py, lines 17-20
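Putting the documented attributes together, an illustrative sketch of parsing and comparing PEP 440 version strings:

```python
from bioimageio.spec.model.v0_5 import Version

v = Version("1.2.3.post1+abc")
assert v.release == (1, 2, 3)
assert (v.major, v.minor, v.micro) == (1, 2, 3)
assert v.is_postrelease and not v.is_prerelease
assert v.public == "1.2.3.post1" and v.base_version == "1.2.3"
assert Version("1.2.3") < Version("1.10.0")  # PEP 440 ordering, not lexicographic
```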
WeightsDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"ArchitectureFromFileDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Architecture source file",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"source",
"callable"
],
"title": "model.v0_5.ArchitectureFromFileDescr",
"type": "object"
},
"ArchitectureFromLibraryDescr": {
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
},
"import_from": {
"description": "Where to import the callable from, i.e. `from <import_from> import <callable>`",
"title": "Import From",
"type": "string"
}
},
"required": [
"callable",
"import_from"
],
"title": "model.v0_5.ArchitectureFromLibraryDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"FileDescr": {
"additionalProperties": false,
"description": "A file description",
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "File source",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
}
},
"required": [
"source"
],
"title": "_internal.io.FileDescr",
"type": "object"
},
"KerasHdf5WeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "TensorFlow version used to create these weights."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.KerasHdf5WeightsDescr",
"type": "object"
},
"OnnxWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"opset_version": {
"description": "ONNX opset version",
"minimum": 7,
"title": "Opset Version",
"type": "integer"
},
"external_data": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "weights.onnx.data"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Source of the external ONNX data file holding the weights.\n(If present **source** holds the ONNX architecture without weights)."
}
},
"required": [
"source",
"opset_version"
],
"title": "model.v0_5.OnnxWeightsDescr",
"type": "object"
},
"PytorchStateDictWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"architecture": {
"anyOf": [
{
"$ref": "#/$defs/ArchitectureFromFileDescr"
},
{
"$ref": "#/$defs/ArchitectureFromLibraryDescr"
}
],
"title": "Architecture"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used.\nIf `architecture.depencencies` is specified it has to include pytorch and any version pinning has to be compatible."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom depencies beyond pytorch described in a Conda environment file.\nAllows to specify custom dependencies, see conda docs:\n- [Exporting an environment file across platforms](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-an-environment-file-across-platforms)\n- [Creating an environment file manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually)\n\nThe conda environment file should include pytorch and any version pinning has to be compatible with\n**pytorch_version**."
}
},
"required": [
"source",
"architecture",
"pytorch_version"
],
"title": "model.v0_5.PytorchStateDictWeightsDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"TensorflowJsWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowJsWeightsDescr",
"type": "object"
},
"TensorflowSavedModelBundleWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"tensorflow_version": {
"$ref": "#/$defs/Version",
"description": "Version of the TensorFlow library used."
},
"dependencies": {
"anyOf": [
{
"$ref": "#/$defs/FileDescr",
"examples": [
{
"source": "environment.yaml"
}
]
},
{
"type": "null"
}
],
"default": null,
"description": "Custom dependencies beyond tensorflow.\nShould include tensorflow and any version pinning has to be compatible with **tensorflow_version**."
}
},
"required": [
"source",
"tensorflow_version"
],
"title": "model.v0_5.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
},
"TorchscriptWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
},
"pytorch_version": {
"$ref": "#/$defs/Version",
"description": "Version of the PyTorch library used."
}
},
"required": [
"source",
"pytorch_version"
],
"title": "model.v0_5.TorchscriptWeightsDescr",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"keras_hdf5": {
"anyOf": [
{
"$ref": "#/$defs/KerasHdf5WeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"onnx": {
"anyOf": [
{
"$ref": "#/$defs/OnnxWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"pytorch_state_dict": {
"anyOf": [
{
"$ref": "#/$defs/PytorchStateDictWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_js": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowJsWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_saved_model_bundle": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"torchscript": {
"anyOf": [
{
"$ref": "#/$defs/TorchscriptWeightsDescr"
},
{
"type": "null"
}
],
"default": null
}
},
"title": "model.v0_5.WeightsDescr",
"type": "object"
}
Fields:
- keras_hdf5 (Optional[KerasHdf5WeightsDescr])
- onnx (Optional[OnnxWeightsDescr])
- pytorch_state_dict (Optional[PytorchStateDictWeightsDescr])
- tensorflow_js (Optional[TensorflowJsWeightsDescr])
- tensorflow_saved_model_bundle (Optional[TensorflowSavedModelBundleWeightsDescr])
- torchscript (Optional[TorchscriptWeightsDescr])
Validators:
- check_entries
pytorch_state_dict
pydantic-field
¤
pytorch_state_dict: Optional[
PytorchStateDictWeightsDescr
] = None
tensorflow_saved_model_bundle
pydantic-field
¤
tensorflow_saved_model_bundle: Optional[
TensorflowSavedModelBundleWeightsDescr
] = None
__getitem__
¤
__getitem__(
key: Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
],
)
Source code in src/bioimageio/spec/model/v0_5.py, lines 2509-2538
__setitem__
¤
__setitem__(
key: Literal["keras_hdf5"],
value: Optional[KerasHdf5WeightsDescr],
) -> None
__setitem__(
key: Literal["onnx"], value: Optional[OnnxWeightsDescr]
) -> None
__setitem__(
key: Literal["pytorch_state_dict"],
value: Optional[PytorchStateDictWeightsDescr],
) -> None
__setitem__(
key: Literal["tensorflow_js"],
value: Optional[TensorflowJsWeightsDescr],
) -> None
__setitem__(
key: Literal["tensorflow_saved_model_bundle"],
value: Optional[TensorflowSavedModelBundleWeightsDescr],
) -> None
__setitem__(
key: Literal["torchscript"],
value: Optional[TorchscriptWeightsDescr],
) -> None
__setitem__(
key: Literal[
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript",
],
value: Optional[SpecificWeightsDescr],
)
Source code in src/bioimageio/spec/model/v0_5.py
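These overloads make a WeightsDescr behave like a small mapping keyed by weights format. A minimal usage sketch, assuming model is a ModelDescr that was already loaded (e.g. via bioimageio.spec.load_description):
weights = model.weights                  # bioimageio.spec.model.v0_5.WeightsDescr
ts = weights["torchscript"]              # same as weights.torchscript; may be None
if ts is not None:
    print(ts.source, ts.pytorch_version)
weights["onnx"] = None                   # drop the ONNX entry again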
check_entries
pydantic-validator
¤
check_entries() -> Self
Source code in src/bioimageio/spec/model/v0_5.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
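Since model_validate is inherited from the spec's internal Node base class, the same call works for every description class on this page. A minimal sketch using SizeReference, one of the simpler nodes that involves no file I/O:
from bioimageio.spec.model.v0_5 import SizeReference

size = SizeReference.model_validate(
    {"tensor_id": "input", "axis_id": "w", "offset": -1}
)
print(size.tensor_id, size.axis_id, size.offset)  # input w -1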
WeightsEntryDescrBase
pydantic-model
¤
Bases: FileDescr
Show JSON schema:
{
"$defs": {
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_3.Author",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
}
},
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "Source of the weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"comment": {
"default": "",
"description": "A comment about this weights entry, for example how these weights were created.",
"title": "Comment",
"type": "string"
}
},
"required": [
"source"
],
"title": "model.v0_5.WeightsEntryDescrBase",
"type": "object"
}
Fields:
- sha256(Optional[Sha256])
- source(FileSource)
- authors(Optional[List[Author]])
- parent(Optional[WeightsFormat])
- comment(str)
Validators:
- _validate
authors
pydantic-field
¤
authors: Optional[List[Author]] = None
Authors
Either the person(s) that have trained this model resulting in the original weights file.
(If this is the initial weights entry, i.e. it does not have a parent)
Or the person(s) who have converted the weights to this weights format.
(If this is a child weight, i.e. it has a parent field)
comment
pydantic-field
¤
comment: str = ''
A comment about this weights entry, for example how these weights were created.
parent
pydantic-field
¤
parent: Optional[WeightsFormat] = None
The source weights these weights were converted from.
For example, if a model's weights were converted from the pytorch_state_dict format to torchscript,
the pytorch_state_dict weights entry has no parent and is the parent of the torchscript weights.
All weight entries except one (the initial set of weights resulting from training the model)
need to have this field.
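For illustration, the parent relationship could look like this in the weights section of a model RDF, written here as a plain Python dict (file names and versions are hypothetical; a complete pytorch_state_dict entry would additionally carry an architecture description, omitted here for brevity):
weights_data = {
    "pytorch_state_dict": {
        "source": "weights.pt",              # initial weights from training: no parent
        "pytorch_version": "2.1.0",
    },
    "torchscript": {
        "source": "weights_torchscript.pt",
        "pytorch_version": "2.1.0",
        "parent": "pytorch_state_dict",      # converted from the state-dict entry
        "comment": "exported with torch.jit.trace",
    },
}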
download
¤
download(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
alias for .get_reader
Source code in src/bioimageio/spec/_internal/io.py
get_reader
¤
get_reader(
*,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
)
open the file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
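A minimal usage sketch, assuming entry is a weights entry (or any other FileDescr) of an already loaded description and that the returned reader is file-like:
reader = entry.get_reader()   # opens the source, downloading it first if it is a URL
header = reader.read(16)      # e.g. peek at the first bytes; entry.download() is an alias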
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_sha256
¤
validate_sha256(force_recompute: bool = False) -> None
validate the sha256 hash value of the source file
Source code in src/bioimageio/spec/_internal/io.py
WithHalo
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"halo": {
"description": "The halo should be cropped from the output tensor to avoid boundary effects.\nIt is to be cropped from both sides, i.e. `size_after_crop = size - 2 * halo`.\nTo document a halo that is already cropped by the model use `size.offset` instead.",
"minimum": 1,
"title": "Halo",
"type": "integer"
},
"size": {
"$ref": "#/$defs/SizeReference",
"description": "reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
]
}
},
"required": [
"halo",
"size"
],
"title": "model.v0_5.WithHalo",
"type": "object"
}
Fields:
- halo(int)
- size(SizeReference)
halo
pydantic-field
¤
halo: int
The halo should be cropped from the output tensor to avoid boundary effects.
It is to be cropped from both sides, i.e. size_after_crop = size - 2 * halo.
To document a halo that is already cropped by the model use size.offset instead.
size
pydantic-field
¤
size: SizeReference
reference to another axis with an optional offset (see SizeReference)
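A quick worked example of the cropping arithmetic described for halo (numbers are hypothetical):
size = 256                        # output axis length produced by the model
halo = 32                         # to be cropped from both sides
size_after_crop = size - 2 * halo
assert size_after_crop == 192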
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
WithSuffix
dataclass
¤
WithSuffix(
suffix: Union[LiteralString, Tuple[LiteralString, ...]],
case_sensitive: bool,
)
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| validate | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| case_sensitive | TYPE: bool |
| suffix | TYPE: Union[LiteralString, Tuple[LiteralString, ...]] |
__get_pydantic_core_schema__
¤
__get_pydantic_core_schema__(
source: Type[Any], handler: GetCoreSchemaHandler
)
Source code in src/bioimageio/spec/_internal/io.py
validate
¤
validate(
value: Union[FileSource, FileDescr],
) -> Union[FileSource, FileDescr]
Source code in src/bioimageio/spec/_internal/io.py
ZeroMeanUnitVarianceDescr
pydantic-model
¤
Bases: ProcessingDescrBase
Subtract mean and divide by variance.
Examples:
Subtract tensor mean and variance
- in YAML:
  preprocessing:
  - id: zero_mean_unit_variance
- in Python:
  >>> preprocessing = [ZeroMeanUnitVarianceDescr()]
Show JSON schema:
{
"$defs": {
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Subtract mean and divide by variance.\n\nExamples:\n Subtract tensor mean and variance\n - in YAML\n ```yaml\n preprocessing:\n - id: zero_mean_unit_variance\n ```\n - in Python\n >>> preprocessing = [ZeroMeanUnitVarianceDescr()]",
"properties": {
"id": {
"const": "zero_mean_unit_variance",
"title": "Id",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"id"
],
"title": "model.v0_5.ZeroMeanUnitVarianceDescr",
"type": "object"
}
Fields:
- id(Literal['zero_mean_unit_variance'])
- kwargs(ZeroMeanUnitVarianceKwargs)
implemented_id
class-attribute
¤
implemented_id: Literal["zero_mean_unit_variance"] = (
"zero_mean_unit_variance"
)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
ZeroMeanUnitVarianceKwargs
pydantic-model
¤
Bases: ProcessingKwargs
key word arguments for ZeroMeanUnitVarianceDescr
Show JSON schema:
{
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"axes": {
"anyOf": [
{
"items": {
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.\nFor example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')\nresulting in a tensor of equal shape normalized per channel, specify `axes=('batch', 'x', 'y')`.\nTo normalize each sample independently leave out the 'batch' axis.\nDefault: Scale all axes jointly.",
"examples": [
[
"batch",
"x",
"y"
]
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"title": "model.v0_5.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
Fields:
- axes(Optional[Sequence[AxisId]])
- eps(float)
axes
pydantic-field
¤
axes: Optional[Sequence[AxisId]] = None
The subset of axes to normalize jointly, i.e. axes to reduce to compute mean/std.
For example to normalize 'batch', 'x' and 'y' jointly in a tensor ('batch', 'channel', 'y', 'x')
resulting in a tensor of equal shape normalized per channel, specify axes=('batch', 'x', 'y').
To normalize each sample independently leave out the 'batch' axis.
Default: Scale all axes jointly.
eps
pydantic-field
¤
eps: float = 1e-06
epsilon for numeric stability: out = (tensor - mean) / (std + eps).
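For illustration only, this is roughly the computation a consumer library would apply for axes=('batch', 'x', 'y') on a ('batch', 'channel', 'y', 'x') tensor; bioimageio.spec merely describes the step, it does not execute it:
import numpy as np

tensor = np.random.rand(1, 3, 64, 64).astype("float32")  # (batch, channel, y, x)
reduce_axes = (0, 2, 3)   # 'batch', 'y' and 'x' jointly -> normalized per channel
eps = 1e-6
mean = tensor.mean(axis=reduce_axes, keepdims=True)
std = tensor.std(axis=reduce_axes, keepdims=True)
out = (tensor - mean) / (std + eps)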
__contains__
¤
__contains__(item: str) -> bool
Source code in src/bioimageio/spec/_internal/common_nodes.py
__getitem__
¤
__getitem__(item: str) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
get
¤
get(item: str, default: Any = None) -> Any
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_ArchFileConv
¤
_ArchFileConv(src: Type[SRC], tgt: Type[TGT])
Bases: Converter[_CallableFromFile_v0_4, ArchitectureFromFileDescr, Optional[Sha256], Dict[str, Any]]
| METHOD | DESCRIPTION |
|---|---|
| convert | convert source node |
| convert_as_dict | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| src | TYPE: Type[SRC] |
| tgt | TYPE: Type[TGT] |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| source | A bioimageio description node. TYPE: SRC |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | conversion failed |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py
_ArchLibConv
¤
_ArchLibConv(src: Type[SRC], tgt: Type[TGT])
Bases: Converter[_CallableFromDepencency_v0_4, ArchitectureFromLibraryDescr, Dict[str, Any]]
| METHOD | DESCRIPTION |
|---|---|
| convert | convert source node |
| convert_as_dict | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| src | TYPE: Type[SRC] |
| tgt | TYPE: Type[TGT] |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| source | A bioimageio description node. TYPE: SRC |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | conversion failed |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py
_ArchitectureCallableDescr
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
}
},
"additionalProperties": false,
"properties": {
"callable": {
"description": "Identifier of the callable that returns a torch.nn.Module instance.",
"examples": [
"MyNetworkClass",
"get_my_model"
],
"minLength": 1,
"title": "Identifier",
"type": "string"
},
"kwargs": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "key word arguments for the `callable`",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"callable"
],
"title": "model.v0_5._ArchitectureCallableDescr",
"type": "object"
}
Fields:
- callable(Identifier)
- kwargs(Dict[str, YamlValue])
callable
pydantic-field
¤
callable: Identifier
Identifier of the callable that returns a torch.nn.Module instance.
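For illustration, an architecture entry could look like this (class name and kwargs are hypothetical):
architecture_data = {
    "callable": "MyNetworkClass",   # a callable returning a torch.nn.Module instance
    "kwargs": {"in_channels": 1, "out_channels": 3, "depth": 4},
}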
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_Author_v0_4
pydantic-model
¤
Bases: _Person
Show JSON schema:
{
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_2.Author",
"type": "object"
}
Fields:
- affiliation(Optional[str])
- email(Optional[EmailStr])
- orcid(Optional[OrcidId])
- name(str)
- github_user(Optional[str])
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_AxisSizes
¤
Bases: NamedTuple
the lengths of all axes of model inputs and outputs
| ATTRIBUTE | DESCRIPTION |
|---|---|
| inputs | |
| outputs | |
_BinarizeDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Binarize the tensor with a fixed BinarizeKwargs.threshold.
Values above the threshold will be set to one, values below the threshold to zero.
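For illustration only (the thresholding itself is performed by consumer software, not by bioimageio.spec):
import numpy as np

tensor = np.array([0.1, 0.49, 0.5, 0.9], dtype="float32")
threshold = 0.5
binarized = (tensor > threshold).astype("float32")  # values strictly above 0.5 -> 1.0, rest -> 0.0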
Show JSON schema:
{
"$defs": {
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_4.BinarizeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "BinarizeDescr the tensor with a fixed `BinarizeKwargs.threshold`.\nValues above the threshold will be set to one, values below the threshold to zero.",
"properties": {
"name": {
"const": "binarize",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/BinarizeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.BinarizeDescr",
"type": "object"
}
Fields:
- name(Literal['binarize'])
- kwargs(BinarizeKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_CallableFromDepencency_v0_4
¤
Bases: ValidatedStringWithInnerNode[CallableFromDepencencyNode]
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| callable_name | The callable Python identifier implemented in module module_name. |
| module_name | The Python module that implements callable_name. |
| root_model | |
callable_name
property
¤
callable_name
The callable Python identifier implemented in module module_name.
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
_CallableFromFile_v0_4
¤
Bases: ValidatedStringWithInnerNode[CallableFromFileNode]
| METHOD | DESCRIPTION |
|---|---|
| __get_pydantic_core_schema__ | |
| __get_pydantic_json_schema__ | |
| __new__ | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| callable_name | The callable Python identifier implemented in source_file. |
| root_model | |
| source_file | The Python source file that implements callable_name. |
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
_ClipDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Clip tensor values to a range.
Set tensor values below ClipKwargs.min to ClipKwargs.min
and above ClipKwargs.max to ClipKwargs.max.
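For illustration only, clipping with min=0 and max=1 (the clipping itself is performed by consumer software, not by bioimageio.spec):
import numpy as np

clipped = np.clip(np.array([-1.0, 0.2, 3.5], dtype="float32"), 0.0, 1.0)
# -> [0.0, 0.2, 1.0]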
Show JSON schema:
{
"$defs": {
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Clip tensor values to a range.\n\nSet tensor values below `ClipKwargs.min` to `ClipKwargs.min`\nand above `ClipKwargs.max` to `ClipKwargs.max`.",
"properties": {
"name": {
"const": "clip",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ClipDescr",
"type": "object"
}
Fields:
- name(Literal['clip'])
- kwargs(ClipKwargs)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_DataDepSize
¤
_ImplicitOutputShape_v0_4
pydantic-model
¤
Bases: Node
Output tensor shape depending on an input tensor shape.
shape(output_tensor) = shape(input_tensor) * scale + 2 * offset
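A worked example of this formula, assuming a hypothetical reference tensor of shape (1, 512, 512) and that the non-null scale entries line up with the reference axes in order (a null scale marks a new output dimension of length 2 * offset):
input_shape = (1, 512, 512)
scale = (1.0, 0.5, 0.5, None)   # one entry per output axis; None marks a new axis
offset = (0, 0, 0, 8)

output_shape = []
reference_axes = iter(input_shape)
for s, off in zip(scale, offset):
    if s is None:
        output_shape.append(2 * off)                              # new dimension
    else:
        output_shape.append(int(next(reference_axes) * s + 2 * off))
assert output_shape == [1, 256, 256, 16]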
Show JSON schema:
{
"additionalProperties": false,
"description": "Output tensor shape depending on an input tensor shape.\n`shape(output_tensor) = shape(input_tensor) * scale + 2 * offset`",
"properties": {
"reference_tensor": {
"description": "Name of the reference tensor.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"scale": {
"description": "output_pix/input_pix for each dimension.\n'null' values indicate new dimensions, whose length is defined by 2*`offset`",
"items": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
"minItems": 1,
"title": "Scale",
"type": "array"
},
"offset": {
"description": "Position of origin wrt to input.",
"items": {
"anyOf": [
{
"type": "integer"
},
{
"multipleOf": 0.5,
"type": "number"
}
]
},
"minItems": 1,
"title": "Offset",
"type": "array"
}
},
"required": [
"reference_tensor",
"scale",
"offset"
],
"title": "model.v0_4.ImplicitOutputShape",
"type": "object"
}
Fields:
- reference_tensor(TensorName)
- scale(NotEmpty[List[Optional[float]]])
- offset(NotEmpty[List[Union[int, float]]])
Validators:
- matching_lengths
scale
pydantic-field
¤
scale: NotEmpty[List[Optional[float]]]
output_pix/input_pix for each dimension.
'null' values indicate new dimensions, whose length is defined by 2*offset
__len__
¤
__len__() -> int
Source code in src/bioimageio/spec/model/v0_4.py
matching_lengths
pydantic-validator
¤
matching_lengths() -> Self
Source code in src/bioimageio/spec/model/v0_4.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
_InputTensorConv
¤
_InputTensorConv(src: Type[SRC], tgt: Type[TGT])
Bases: Converter[_InputTensorDescr_v0_4, InputTensorDescr, FileSource_, Optional[FileSource_], Mapping[_TensorName_v0_4, Mapping[str, int]]]
| METHOD | DESCRIPTION |
|---|---|
| convert | convert source node |
| convert_as_dict | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| src | TYPE: Type[SRC] |
| tgt | TYPE: Type[TGT] |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| source | A bioimageio description node. TYPE: SRC |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | conversion failed |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py
_InputTensorDescr_v0_4
pydantic-model
¤
Bases: TensorDescrBase
Show JSON schema:
{
"$defs": {
"BinarizeDescr": {
"additionalProperties": false,
"description": "BinarizeDescr the tensor with a fixed `BinarizeKwargs.threshold`.\nValues above the threshold will be set to one, values below the threshold to zero.",
"properties": {
"name": {
"const": "binarize",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/BinarizeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_4.BinarizeKwargs",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Clip tensor values to a range.\n\nSet tensor values below `ClipKwargs.min` to `ClipKwargs.min`\nand above `ClipKwargs.max` to `ClipKwargs.max`.",
"properties": {
"name": {
"const": "clip",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"ParameterizedInputShape": {
"additionalProperties": false,
"description": "A sequence of valid shapes given by `shape_k = min + k * step for k in {0, 1, ...}`.",
"properties": {
"min": {
"description": "The minimum input shape",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Min",
"type": "array"
},
"step": {
"description": "The minimum shape change",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Step",
"type": "array"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_4.ParameterizedInputShape",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.",
"properties": {
"name": {
"const": "scale_linear",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleLinearKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleLinearDescr`",
"properties": {
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to scale the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"title": "model.v0_4.ScaleLinearKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.",
"properties": {
"name": {
"const": "scale_range",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"mode": {
"description": "Mode for computing percentiles.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | compute for the entire dataset |\n| per_sample | compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example xy to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"min_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"ge": 0,
"lt": 100,
"title": "Min Percentile"
},
"max_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"gt": 1,
"le": 100,
"title": "Max Percentile"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"minLength": 1,
"title": "TensorName",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor name to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.\nFor a tensor in `outputs` only input tensor refereences are allowed if `mode: per_dataset`",
"title": "Reference Tensor"
}
},
"required": [
"mode",
"axes"
],
"title": "model.v0_4.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid funciton, a.k.a. expit function.",
"properties": {
"name": {
"const": "sigmoid",
"title": "Name",
"type": "string"
}
},
"required": [
"name"
],
"title": "model.v0_4.SigmoidDescr",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.",
"properties": {
"name": {
"const": "zero_mean_unit_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"mode": {
"default": "fixed",
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| fixed | Fixed values for mean and variance |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"fixed",
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example `xy` to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"mean": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The mean value(s) to use for `mode: fixed`.\nFor example `[1.1, 2.2, 3.3]` in the case of a 3 channel image with `axes: xy`.",
"examples": [
[
1.1,
2.2,
3.3
]
],
"title": "Mean"
},
"std": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The standard deviation values to use for `mode: fixed`. Analogous to mean.",
"examples": [
[
0.1,
0.2,
0.3
]
],
"title": "Std"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"axes"
],
"title": "model.v0_4.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"name": {
"description": "Tensor name. No duplicates are allowed.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"description": {
"default": "",
"title": "Description",
"type": "string"
},
"axes": {
"description": "Axes identifying characters. Same length and order as the axes in `shape`.\n| axis | description |\n| --- | --- |\n| b | batch (groups multiple samples) |\n| i | instance/index/element |\n| t | time |\n| c | channel |\n| z | spatial dimension z |\n| y | spatial dimension y |\n| x | spatial dimension x |",
"title": "Axes",
"type": "string"
},
"data_range": {
"anyOf": [
{
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"type": "number"
},
{
"type": "number"
}
],
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\nIf not specified, the full data range that can be expressed in `data_type` is allowed.",
"title": "Data Range"
},
"data_type": {
"description": "For now an input tensor is expected to be given as `float32`.\nThe data flow in bioimage.io models is explained\n[in this diagram.](https://docs.google.com/drawings/d/1FTw8-Rn6a6nXdkZ_SkMumtcjvur9mtIhRqLwnKqZNHM/edit).",
"enum": [
"float32",
"uint8",
"uint16"
],
"title": "Data Type",
"type": "string"
},
"shape": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"$ref": "#/$defs/ParameterizedInputShape"
}
],
"description": "Specification of input tensor shape.",
"examples": [
[
1,
512,
512,
1
],
{
"min": [
1,
64,
64,
1
],
"step": [
0,
32,
32,
0
]
}
],
"title": "Shape"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "name"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"name",
"axes",
"data_type",
"shape"
],
"title": "model.v0_4.InputTensorDescr",
"type": "object"
}
Fields:
- name(TensorName)
- description(str)
- axes(AxesStr)
- data_range(Optional[Tuple[float, float]])
- data_type(Literal['float32', 'uint8', 'uint16'])
- shape(Union[Sequence[int], ParameterizedInputShape])
- preprocessing(List[PreprocessingDescr])
Validators:
- validate_preprocessing_kwargs
- zero_batch_step_and_one_batch_size
axes
pydantic-field
¤
axes: AxesStr
Axes identifying characters. Same length and order as the axes in shape.
| axis | description |
| --- | --- |
| b | batch (groups multiple samples) |
| i | instance/index/element |
| t | time |
| c | channel |
| z | spatial dimension z |
| y | spatial dimension y |
| x | spatial dimension x |
data_range
pydantic-field
¤
data_range: Optional[Tuple[float, float]] = None
Tuple (minimum, maximum) specifying the allowed range of the data in this tensor.
If not specified, the full data range that can be expressed in data_type is allowed.
data_type
pydantic-field
¤
data_type: Literal['float32', 'uint8', 'uint16']
For now an input tensor is expected to be given as float32.
The data flow in bioimage.io models is explained in this diagram: https://docs.google.com/drawings/d/1FTw8-Rn6a6nXdkZ_SkMumtcjvur9mtIhRqLwnKqZNHM/edit
preprocessing
pydantic-field
¤
preprocessing: List[PreprocessingDescr]
Description of how this input should be preprocessed.
shape
pydantic-field
¤
shape: Union[Sequence[int], ParameterizedInputShape]
Specification of input tensor shape.
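Putting the fields above together, a v0_4 input tensor description could look like the following plain Python dict (tensor name, sizes and preprocessing are hypothetical):
input_tensor_data = {
    "name": "raw",
    "axes": "bcyx",
    "data_type": "float32",
    # parameterized shape; the zero_batch_step_and_one_batch_size validator above
    # requires a batch step of 0 and a minimum batch size of 1
    "shape": {"min": [1, 1, 64, 64], "step": [0, 0, 32, 32]},
    "preprocessing": [
        {"name": "zero_mean_unit_variance",
         "kwargs": {"mode": "per_sample", "axes": "xy"}},
    ],
}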
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| obj | The object to validate. TYPE: Union[Any, Mapping[str, Any]] |
| strict | Whether to raise an exception on invalid fields. TYPE: Optional[bool] |
| from_attributes | Whether to extract data from object attributes. TYPE: Optional[bool] |
| context | Additional context to pass to the validator. TYPE: Union[ValidationContext, Mapping[str, Any], None] |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | If the object failed validation. |
| RETURNS | DESCRIPTION |
|---|---|
| Self | The validated description instance. |
Source code in src/bioimageio/spec/_internal/node.py
validate_preprocessing_kwargs
pydantic-validator
¤
validate_preprocessing_kwargs() -> Self
Source code in src/bioimageio/spec/model/v0_4.py
zero_batch_step_and_one_batch_size
pydantic-validator
¤
zero_batch_step_and_one_batch_size() -> Self
Source code in src/bioimageio/spec/model/v0_4.py
_ModelConv
¤
_ModelConv(src: Type[SRC], tgt: Type[TGT])
Bases: Converter[_ModelDescr_v0_4, ModelDescr]
| METHOD | DESCRIPTION |
|---|---|
| convert | convert source node |
| convert_as_dict | |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| src | TYPE: Type[SRC] |
| tgt | TYPE: Type[TGT] |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| source | A bioimageio description node. TYPE: SRC |
| RAISES | DESCRIPTION |
|---|---|
| ValidationError | conversion failed |
Source code in src/bioimageio/spec/_internal/node_converter.py
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py
_ModelDescr_v0_4
pydantic-model
¤
Bases: GenericModelDescrBase
Specification of the fields used in a bioimage.io-compliant RDF that describes AI models with pretrained weights.
These fields are typically stored in a YAML file which we call a model resource description file (model RDF).
Show JSON schema:
{
"$defs": {
"AttachmentsDescr": {
"additionalProperties": true,
"properties": {
"files": {
"description": "File attachments",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Files",
"type": "array"
}
},
"title": "generic.v0_2.AttachmentsDescr",
"type": "object"
},
"Author": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"title": "Name",
"type": "string"
},
"github_user": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Github User"
}
},
"required": [
"name"
],
"title": "generic.v0_2.Author",
"type": "object"
},
"BadgeDescr": {
"additionalProperties": false,
"description": "A custom badge",
"properties": {
"label": {
"description": "badge label to display on hover",
"examples": [
"Open in Colab"
],
"title": "Label",
"type": "string"
},
"icon": {
"anyOf": [
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "badge icon (included in bioimage.io package if not a URL)",
"examples": [
"https://colab.research.google.com/assets/colab-badge.svg"
],
"title": "Icon"
},
"url": {
"description": "target URL",
"examples": [
"https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb"
],
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
}
},
"required": [
"label",
"url"
],
"title": "generic.v0_2.BadgeDescr",
"type": "object"
},
"BinarizeDescr": {
"additionalProperties": false,
"description": "BinarizeDescr the tensor with a fixed `BinarizeKwargs.threshold`.\nValues above the threshold will be set to one, values below the threshold to zero.",
"properties": {
"name": {
"const": "binarize",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/BinarizeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_4.BinarizeKwargs",
"type": "object"
},
"CiteEntry": {
"additionalProperties": false,
"properties": {
"text": {
"description": "free text description",
"title": "Text",
"type": "string"
},
"doi": {
"anyOf": [
{
"description": "A digital object identifier, see https://www.doi.org/",
"pattern": "^10\\.[0-9]{4}.+$",
"title": "Doi",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A digital object identifier (DOI) is the prefered citation reference.\nSee https://www.doi.org/ for details. (alternatively specify `url`)",
"title": "Doi"
},
"url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to cite (preferably specify a `doi` instead)",
"title": "Url"
}
},
"required": [
"text"
],
"title": "generic.v0_2.CiteEntry",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Clip tensor values to a range.\n\nSet tensor values below `ClipKwargs.min` to `ClipKwargs.min`\nand above `ClipKwargs.max` to `ClipKwargs.max`.",
"properties": {
"name": {
"const": "clip",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"DatasetDescr": {
"additionalProperties": false,
"description": "A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage\nprocessing.",
"properties": {
"name": {
"description": "A human-friendly name of the resource description",
"minLength": 1,
"title": "Name",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 1,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Authors",
"type": "array"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "file and other attachments"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"config": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n bioimageio: # here is the domain name\n my_custom_key: 3837283\n another_key:\n nested: value\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
"examples": [
{
"bioimageio": {
"another_key": {
"nested": "value"
},
"my_custom_key": 3837283
},
"imagej": {
"macro_dir": "path/to/macro/file"
}
}
],
"title": "Config",
"type": "object"
},
"download_url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to download the resource from (deprecated)",
"title": "Download Url"
},
"git_repo": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"rdf_source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
"title": "Rdf Source"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version)",
"title": "Version Number"
},
"format_version": {
"const": "0.2.4",
"description": "The format version of this resource specification\n(not the `version` of the resource description)\nWhen creating a new resource always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"badges": {
"description": "badges associated with this resource",
"items": {
"$ref": "#/$defs/BadgeDescr"
},
"title": "Badges",
"type": "array"
},
"documentation": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.",
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
],
"title": "Documentation"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"enum": [
"AGPL-1.0",
"AGPL-3.0",
"BSD-2-Clause-FreeBSD",
"BSD-2-Clause-NetBSD",
"bzip2-1.0.5",
"eCos-2.0",
"GFDL-1.1",
"GFDL-1.2",
"GFDL-1.3",
"GPL-1.0",
"GPL-1.0+",
"GPL-2.0",
"GPL-2.0+",
"GPL-2.0-with-autoconf-exception",
"GPL-2.0-with-bison-exception",
"GPL-2.0-with-classpath-exception",
"GPL-2.0-with-font-exception",
"GPL-2.0-with-GCC-exception",
"GPL-3.0",
"GPL-3.0+",
"GPL-3.0-with-autoconf-exception",
"GPL-3.0-with-GCC-exception",
"LGPL-2.0",
"LGPL-2.0+",
"LGPL-2.1",
"LGPL-2.1+",
"LGPL-3.0",
"LGPL-3.0+",
"Nunit",
"StandardML-NJ",
"wxWindows"
],
"title": "DeprecatedLicenseId",
"type": "string"
},
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do not support custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"type": {
"const": "dataset",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "\"URL to the source of the dataset.",
"title": "Source"
}
},
"required": [
"name",
"description",
"format_version",
"type"
],
"title": "dataset 0.2.4",
"type": "object"
},
"Datetime": {
"description": "Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format\nwith a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).",
"format": "date-time",
"title": "Datetime",
"type": "string"
},
"ImplicitOutputShape": {
"additionalProperties": false,
"description": "Output tensor shape depending on an input tensor shape.\n`shape(output_tensor) = shape(input_tensor) * scale + 2 * offset`",
"properties": {
"reference_tensor": {
"description": "Name of the reference tensor.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"scale": {
"description": "output_pix/input_pix for each dimension.\n'null' values indicate new dimensions, whose length is defined by 2*`offset`",
"items": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
"minItems": 1,
"title": "Scale",
"type": "array"
},
"offset": {
"description": "Position of origin wrt to input.",
"items": {
"anyOf": [
{
"type": "integer"
},
{
"multipleOf": 0.5,
"type": "number"
}
]
},
"minItems": 1,
"title": "Offset",
"type": "array"
}
},
"required": [
"reference_tensor",
"scale",
"offset"
],
"title": "model.v0_4.ImplicitOutputShape",
"type": "object"
},
"InputTensorDescr": {
"additionalProperties": false,
"properties": {
"name": {
"description": "Tensor name. No duplicates are allowed.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"description": {
"default": "",
"title": "Description",
"type": "string"
},
"axes": {
"description": "Axes identifying characters. Same length and order as the axes in `shape`.\n| axis | description |\n| --- | --- |\n| b | batch (groups multiple samples) |\n| i | instance/index/element |\n| t | time |\n| c | channel |\n| z | spatial dimension z |\n| y | spatial dimension y |\n| x | spatial dimension x |",
"title": "Axes",
"type": "string"
},
"data_range": {
"anyOf": [
{
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"type": "number"
},
{
"type": "number"
}
],
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\nIf not specified, the full data range that can be expressed in `data_type` is allowed.",
"title": "Data Range"
},
"data_type": {
"description": "For now an input tensor is expected to be given as `float32`.\nThe data flow in bioimage.io models is explained\n[in this diagram.](https://docs.google.com/drawings/d/1FTw8-Rn6a6nXdkZ_SkMumtcjvur9mtIhRqLwnKqZNHM/edit).",
"enum": [
"float32",
"uint8",
"uint16"
],
"title": "Data Type",
"type": "string"
},
"shape": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"$ref": "#/$defs/ParameterizedInputShape"
}
],
"description": "Specification of input tensor shape.",
"examples": [
[
1,
512,
512,
1
],
{
"min": [
1,
64,
64,
1
],
"step": [
0,
32,
32,
0
]
}
],
"title": "Shape"
},
"preprocessing": {
"description": "Description of how this input should be preprocessed.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "name"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
}
]
},
"title": "Preprocessing",
"type": "array"
}
},
"required": [
"name",
"axes",
"data_type",
"shape"
],
"title": "model.v0_4.InputTensorDescr",
"type": "object"
},
"KerasHdf5WeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"tensorflow_version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "TensorFlow version used to create these weights"
}
},
"required": [
"source"
],
"title": "model.v0_4.KerasHdf5WeightsDescr",
"type": "object"
},
"LinkedDataset": {
"additionalProperties": false,
"description": "Reference to a bioimage.io dataset.",
"properties": {
"id": {
"description": "A valid dataset `id` from the bioimage.io collection.",
"minLength": 1,
"title": "DatasetId",
"type": "string"
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version) of linked dataset",
"title": "Version Number"
}
},
"required": [
"id"
],
"title": "dataset.v0_2.LinkedDataset",
"type": "object"
},
"LinkedModel": {
"additionalProperties": false,
"description": "Reference to a bioimage.io model.",
"properties": {
"id": {
"description": "A valid model `id` from the bioimage.io collection.",
"examples": [
"affable-shark",
"ambitious-sloth"
],
"minLength": 1,
"title": "ModelId",
"type": "string"
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version) of linked model",
"title": "Version Number"
}
},
"required": [
"id"
],
"title": "model.v0_4.LinkedModel",
"type": "object"
},
"Maintainer": {
"additionalProperties": false,
"properties": {
"affiliation": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Affiliation",
"title": "Affiliation"
},
"email": {
"anyOf": [
{
"format": "email",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Email",
"title": "Email"
},
"orcid": {
"anyOf": [
{
"description": "An ORCID identifier, see https://orcid.org/",
"title": "OrcidId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An [ORCID iD](https://support.orcid.org/hc/en-us/sections/360001495313-What-is-ORCID\n) in hyphenated groups of 4 digits, (and [valid](\nhttps://support.orcid.org/hc/en-us/articles/360006897674-Structure-of-the-ORCID-Identifier\n) as per ISO 7064 11,2.)",
"examples": [
"0000-0001-2345-6789"
],
"title": "Orcid"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"github_user": {
"title": "Github User",
"type": "string"
}
},
"required": [
"github_user"
],
"title": "generic.v0_2.Maintainer",
"type": "object"
},
"OnnxWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"opset_version": {
"anyOf": [
{
"minimum": 7,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "ONNX opset version",
"title": "Opset Version"
}
},
"required": [
"source"
],
"title": "model.v0_4.OnnxWeightsDescr",
"type": "object"
},
"OutputTensorDescr": {
"additionalProperties": false,
"properties": {
"name": {
"description": "Tensor name. No duplicates are allowed.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"description": {
"default": "",
"title": "Description",
"type": "string"
},
"axes": {
"description": "Axes identifying characters. Same length and order as the axes in `shape`.\n| axis | description |\n| --- | --- |\n| b | batch (groups multiple samples) |\n| i | instance/index/element |\n| t | time |\n| c | channel |\n| z | spatial dimension z |\n| y | spatial dimension y |\n| x | spatial dimension x |",
"title": "Axes",
"type": "string"
},
"data_range": {
"anyOf": [
{
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"type": "number"
},
{
"type": "number"
}
],
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\nIf not specified, the full data range that can be expressed in `data_type` is allowed.",
"title": "Data Range"
},
"data_type": {
"description": "Data type.\nThe data flow in bioimage.io models is explained\n[in this diagram.](https://docs.google.com/drawings/d/1FTw8-Rn6a6nXdkZ_SkMumtcjvur9mtIhRqLwnKqZNHM/edit).",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Data Type",
"type": "string"
},
"shape": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"$ref": "#/$defs/ImplicitOutputShape"
}
],
"description": "Output tensor shape.",
"title": "Shape"
},
"halo": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The `halo` that should be cropped from the output tensor to avoid boundary effects.\nThe `halo` is to be cropped from both sides, i.e. `shape_after_crop = shape - 2 * halo`.\nTo document a `halo` that is already cropped by the model `shape.offset` has to be used instead.",
"title": "Halo"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "name"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"name",
"axes",
"data_type",
"shape"
],
"title": "model.v0_4.OutputTensorDescr",
"type": "object"
},
"ParameterizedInputShape": {
"additionalProperties": false,
"description": "A sequence of valid shapes given by `shape_k = min + k * step for k in {0, 1, ...}`.",
"properties": {
"min": {
"description": "The minimum input shape",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Min",
"type": "array"
},
"step": {
"description": "The minimum shape change",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Step",
"type": "array"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_4.ParameterizedInputShape",
"type": "object"
},
"PytorchStateDictWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"architecture": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "CallableFromFile",
"type": "string"
},
{
"pattern": "^.+\\..+$",
"title": "CallableFromDepencency",
"type": "string"
}
],
"description": "callable returning a torch.nn.Module instance.\nLocal implementation: `<relative path to file>:<identifier of implementation within the file>`.\nImplementation in a dependency: `<dependency-package>.<[dependency-module]>.<identifier>`.",
"examples": [
"my_function.py:MyNetworkClass",
"my_module.submodule.get_my_model"
],
"title": "Architecture"
},
"architecture_sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The SHA256 of the architecture source file, if the architecture is not defined in a module listed in `dependencies`\nYou can drag and drop your file to this\n[online tool](http://emn178.github.io/online-tools/sha256_checksum.html) to generate a SHA256 in your browser.\nOr you can generate a SHA256 checksum with Python's `hashlib`,\n[here is a codesnippet](https://gist.github.com/FynnBe/e64460463df89439cff218bbf59c1100).",
"title": "Architecture Sha256"
},
"kwargs": {
"additionalProperties": true,
"description": "key word arguments for the `architecture` callable",
"title": "Kwargs",
"type": "object"
},
"pytorch_version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "Version of the PyTorch library used.\nIf `depencencies` is specified it should include pytorch and the verison has to match.\n(`dependencies` overrules `pytorch_version`)"
}
},
"required": [
"source",
"architecture"
],
"title": "model.v0_4.PytorchStateDictWeightsDescr",
"type": "object"
},
"RelativeFilePath": {
"description": "A path relative to the `rdf.yaml` file (also if the RDF source is a URL).",
"format": "path",
"title": "RelativeFilePath",
"type": "string"
},
"RunMode": {
"additionalProperties": false,
"properties": {
"name": {
"anyOf": [
{
"const": "deepimagej",
"type": "string"
},
{
"type": "string"
}
],
"description": "Run mode name",
"title": "Name"
},
"kwargs": {
"additionalProperties": true,
"description": "Run mode specific key word arguments",
"title": "Kwargs",
"type": "object"
}
},
"required": [
"name"
],
"title": "model.v0_4.RunMode",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.",
"properties": {
"name": {
"const": "scale_linear",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleLinearKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleLinearDescr`",
"properties": {
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to scale the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"title": "model.v0_4.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale the tensor s.t. its mean and variance match a reference tensor.",
"properties": {
"name": {
"const": "scale_mean_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceDescr`",
"properties": {
"mode": {
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"reference_tensor": {
"description": "Name of tensor to match.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to normalize the two image axes for 2d data jointly.\nDefault: scale all non-batch axes jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n\"`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"mode",
"reference_tensor"
],
"title": "model.v0_4.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.",
"properties": {
"name": {
"const": "scale_range",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"mode": {
"description": "Mode for computing percentiles.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | compute for the entire dataset |\n| per_sample | compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example xy to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"min_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"ge": 0,
"lt": 100,
"title": "Min Percentile"
},
"max_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"gt": 1,
"le": 100,
"title": "Max Percentile"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"minLength": 1,
"title": "TensorName",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor name to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.\nFor a tensor in `outputs` only input tensor refereences are allowed if `mode: per_dataset`",
"title": "Reference Tensor"
}
},
"required": [
"mode",
"axes"
],
"title": "model.v0_4.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid funciton, a.k.a. expit function.",
"properties": {
"name": {
"const": "sigmoid",
"title": "Name",
"type": "string"
}
},
"required": [
"name"
],
"title": "model.v0_4.SigmoidDescr",
"type": "object"
},
"TensorflowJsWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The multi-file weights.\nAll required files/folders should be a zip archive.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"tensorflow_version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source"
],
"title": "model.v0_4.TensorflowJsWeightsDescr",
"type": "object"
},
"TensorflowSavedModelBundleWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"tensorflow_version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "Version of the TensorFlow library used."
}
},
"required": [
"source"
],
"title": "model.v0_4.TensorflowSavedModelBundleWeightsDescr",
"type": "object"
},
"TorchscriptWeightsDescr": {
"additionalProperties": false,
"properties": {
"source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "The weights file.",
"title": "Source"
},
"sha256": {
"anyOf": [
{
"description": "A SHA-256 hash value",
"maxLength": 64,
"minLength": 64,
"title": "Sha256",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "SHA256 hash value of the **source** file.",
"title": "Sha256"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "Attachments that are specific to this weights entry."
},
"authors": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Author"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Authors\nEither the person(s) that have trained this model resulting in the original weights file.\n (If this is the initial weights entry, i.e. it does not have a `parent`)\nOr the person(s) who have converted the weights to this weights format.\n (If this is a child weight, i.e. it has a `parent` field)",
"title": "Authors"
},
"dependencies": {
"anyOf": [
{
"pattern": "^.+:.+$",
"title": "Dependencies",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Dependency manager and dependency file, specified as `<dependency manager>:<relative file path>`.",
"examples": [
"conda:environment.yaml",
"maven:./pom.xml",
"pip:./requirements.txt"
],
"title": "Dependencies"
},
"parent": {
"anyOf": [
{
"enum": [
"keras_hdf5",
"onnx",
"pytorch_state_dict",
"tensorflow_js",
"tensorflow_saved_model_bundle",
"torchscript"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The source weights these weights were converted from.\nFor example, if a model's weights were converted from the `pytorch_state_dict` format to `torchscript`,\nThe `pytorch_state_dict` weights entry has no `parent` and is the parent of the `torchscript` weights.\nAll weight entries except one (the initial set of weights resulting from training the model),\nneed to have this field.",
"examples": [
"pytorch_state_dict"
],
"title": "Parent"
},
"pytorch_version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "Version of the PyTorch library used."
}
},
"required": [
"source"
],
"title": "model.v0_4.TorchscriptWeightsDescr",
"type": "object"
},
"Uploader": {
"additionalProperties": false,
"properties": {
"email": {
"description": "Email",
"format": "email",
"title": "Email",
"type": "string"
},
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "name",
"title": "Name"
}
},
"required": [
"email"
],
"title": "generic.v0_2.Uploader",
"type": "object"
},
"Version": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
}
],
"description": "wraps a packaging.version.Version instance for validation in pydantic models",
"title": "Version"
},
"WeightsDescr": {
"additionalProperties": false,
"properties": {
"keras_hdf5": {
"anyOf": [
{
"$ref": "#/$defs/KerasHdf5WeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"onnx": {
"anyOf": [
{
"$ref": "#/$defs/OnnxWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"pytorch_state_dict": {
"anyOf": [
{
"$ref": "#/$defs/PytorchStateDictWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_js": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowJsWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"tensorflow_saved_model_bundle": {
"anyOf": [
{
"$ref": "#/$defs/TensorflowSavedModelBundleWeightsDescr"
},
{
"type": "null"
}
],
"default": null
},
"torchscript": {
"anyOf": [
{
"$ref": "#/$defs/TorchscriptWeightsDescr"
},
{
"type": "null"
}
],
"default": null
}
},
"title": "model.v0_4.WeightsDescr",
"type": "object"
},
"YamlValue": {
"anyOf": [
{
"type": "boolean"
},
{
"format": "date",
"type": "string"
},
{
"format": "date-time",
"type": "string"
},
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "string"
},
{
"items": {
"$ref": "#/$defs/YamlValue"
},
"type": "array"
},
{
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"type": "object"
},
{
"type": "null"
}
]
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.",
"properties": {
"name": {
"const": "zero_mean_unit_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"mode": {
"default": "fixed",
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| fixed | Fixed values for mean and variance |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"fixed",
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example `xy` to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"mean": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The mean value(s) to use for `mode: fixed`.\nFor example `[1.1, 2.2, 3.3]` in the case of a 3 channel image with `axes: xy`.",
"examples": [
[
1.1,
2.2,
3.3
]
],
"title": "Mean"
},
"std": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The standard deviation values to use for `mode: fixed`. Analogous to mean.",
"examples": [
[
0.1,
0.2,
0.3
]
],
"title": "Std"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"axes"
],
"title": "model.v0_4.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Specification of the fields used in a bioimage.io-compliant RDF that describes AI models with pretrained weights.\n\nThese fields are typically stored in a YAML file which we call a model resource description file (model RDF).",
"properties": {
"name": {
"description": "A human-readable name of this model.\nIt should be no longer than 64 characters and only contain letter, number, underscore, minus or space characters.",
"minLength": 1,
"title": "Name",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"covers": {
"description": "Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.\nThe supported image formats are: ('.gif', '.jpeg', '.jpg', '.png', '.svg', '.tif', '.tiff')",
"examples": [
[
"cover.png"
]
],
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Covers",
"type": "array"
},
"id_emoji": {
"anyOf": [
{
"examples": [
"\ud83e\udd88",
"\ud83e\udda5"
],
"maxLength": 1,
"minLength": 1,
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "UTF-8 emoji for display alongside the `id`.",
"title": "Id Emoji"
},
"authors": {
"description": "The authors are the creators of the model RDF and the primary points of contact.",
"items": {
"$ref": "#/$defs/Author"
},
"minItems": 1,
"title": "Authors",
"type": "array"
},
"attachments": {
"anyOf": [
{
"$ref": "#/$defs/AttachmentsDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "file and other attachments"
},
"cite": {
"description": "citations",
"items": {
"$ref": "#/$defs/CiteEntry"
},
"title": "Cite",
"type": "array"
},
"config": {
"additionalProperties": {
"$ref": "#/$defs/YamlValue"
},
"description": "A field for custom configuration that can contain any keys not present in the RDF spec.\nThis means you should not store, for example, a github repo URL in `config` since we already have the\n`git_repo` field defined in the spec.\nKeys in `config` may be very specific to a tool or consumer software. To avoid conflicting definitions,\nit is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,\nfor example:\n```yaml\nconfig:\n bioimageio: # here is the domain name\n my_custom_key: 3837283\n another_key:\n nested: value\n imagej: # config specific to ImageJ\n macro_dir: path/to/macro/file\n```\nIf possible, please use [`snake_case`](https://en.wikipedia.org/wiki/Snake_case) for keys in `config`.\nYou may want to list linked files additionally under `attachments` to include them when packaging a resource\n(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains\nan altered rdf.yaml file with local references to the downloaded files)",
"examples": [
{
"bioimageio": {
"another_key": {
"nested": "value"
},
"my_custom_key": 3837283
},
"imagej": {
"macro_dir": "path/to/macro/file"
}
}
],
"title": "Config",
"type": "object"
},
"download_url": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URL to download the resource from (deprecated)",
"title": "Download Url"
},
"git_repo": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "A URL to the Git repository where the resource is being developed.",
"examples": [
"https://github.com/bioimage-io/spec-bioimage-io/tree/main/example_descriptions/models/unet2d_nuclei_broad"
],
"title": "Git Repo"
},
"icon": {
"anyOf": [
{
"maxLength": 2,
"minLength": 1,
"type": "string"
},
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "An icon for illustration",
"title": "Icon"
},
"links": {
"description": "IDs of other bioimage.io resources",
"examples": [
[
"ilastik/ilastik",
"deepimagej/deepimagej",
"zero/notebook_u-net_3d_zerocostdl4mic"
]
],
"items": {
"type": "string"
},
"title": "Links",
"type": "array"
},
"uploader": {
"anyOf": [
{
"$ref": "#/$defs/Uploader"
},
{
"type": "null"
}
],
"default": null,
"description": "The person who uploaded the model (e.g. to bioimage.io)"
},
"maintainers": {
"description": "Maintainers of this resource.\nIf not specified `authors` are maintainers and at least some of them should specify their `github_user` name",
"items": {
"$ref": "#/$defs/Maintainer"
},
"title": "Maintainers",
"type": "array"
},
"rdf_source": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from.\nDo not set this field in a YAML file.",
"title": "Rdf Source"
},
"tags": {
"description": "Associated tags",
"examples": [
[
"unet2d",
"pytorch",
"nucleus",
"segmentation",
"dsb2018"
]
],
"items": {
"type": "string"
},
"title": "Tags",
"type": "array"
},
"version": {
"anyOf": [
{
"$ref": "#/$defs/Version"
},
{
"type": "null"
}
],
"default": null,
"description": "The version of the resource following SemVer 2.0."
},
"version_number": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "version number (n-th published version, not the semantic version)",
"title": "Version Number"
},
"format_version": {
"const": "0.4.10",
"description": "Version of the bioimage.io model description specification used.\nWhen creating a new model always use the latest micro/patch version described here.\nThe `format_version` is important for any consumer software to understand how to parse the fields.",
"title": "Format Version",
"type": "string"
},
"type": {
"const": "model",
"description": "Specialized resource type 'model'",
"title": "Type",
"type": "string"
},
"id": {
"anyOf": [
{
"minLength": 1,
"title": "ModelId",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "bioimage.io-wide unique resource identifier\nassigned by bioimage.io; version **un**specific.",
"title": "Id"
},
"documentation": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
],
"description": "URL or relative path to a markdown file with additional documentation.\nThe recommended documentation file name is `README.md`. An `.md` suffix is mandatory.\nThe documentation should include a '[#[#]]# Validation' (sub)section\nwith details on how to quantitatively validate the model on unseen data.",
"examples": [
"https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md",
"README.md"
],
"title": "Documentation"
},
"inputs": {
"description": "Describes the input tensors expected by this model.",
"items": {
"$ref": "#/$defs/InputTensorDescr"
},
"minItems": 1,
"title": "Inputs",
"type": "array"
},
"license": {
"anyOf": [
{
"enum": [
"0BSD",
"AAL",
"Abstyles",
"AdaCore-doc",
"Adobe-2006",
"Adobe-Display-PostScript",
"Adobe-Glyph",
"Adobe-Utopia",
"ADSL",
"AFL-1.1",
"AFL-1.2",
"AFL-2.0",
"AFL-2.1",
"AFL-3.0",
"Afmparse",
"AGPL-1.0-only",
"AGPL-1.0-or-later",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"Aladdin",
"AMDPLPA",
"AML",
"AML-glslang",
"AMPAS",
"ANTLR-PD",
"ANTLR-PD-fallback",
"Apache-1.0",
"Apache-1.1",
"Apache-2.0",
"APAFML",
"APL-1.0",
"App-s2p",
"APSL-1.0",
"APSL-1.1",
"APSL-1.2",
"APSL-2.0",
"Arphic-1999",
"Artistic-1.0",
"Artistic-1.0-cl8",
"Artistic-1.0-Perl",
"Artistic-2.0",
"ASWF-Digital-Assets-1.0",
"ASWF-Digital-Assets-1.1",
"Baekmuk",
"Bahyph",
"Barr",
"bcrypt-Solar-Designer",
"Beerware",
"Bitstream-Charter",
"Bitstream-Vera",
"BitTorrent-1.0",
"BitTorrent-1.1",
"blessing",
"BlueOak-1.0.0",
"Boehm-GC",
"Borceux",
"Brian-Gladman-2-Clause",
"Brian-Gladman-3-Clause",
"BSD-1-Clause",
"BSD-2-Clause",
"BSD-2-Clause-Darwin",
"BSD-2-Clause-Patent",
"BSD-2-Clause-Views",
"BSD-3-Clause",
"BSD-3-Clause-acpica",
"BSD-3-Clause-Attribution",
"BSD-3-Clause-Clear",
"BSD-3-Clause-flex",
"BSD-3-Clause-HP",
"BSD-3-Clause-LBNL",
"BSD-3-Clause-Modification",
"BSD-3-Clause-No-Military-License",
"BSD-3-Clause-No-Nuclear-License",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-No-Nuclear-Warranty",
"BSD-3-Clause-Open-MPI",
"BSD-3-Clause-Sun",
"BSD-4-Clause",
"BSD-4-Clause-Shortened",
"BSD-4-Clause-UC",
"BSD-4.3RENO",
"BSD-4.3TAHOE",
"BSD-Advertising-Acknowledgement",
"BSD-Attribution-HPND-disclaimer",
"BSD-Inferno-Nettverk",
"BSD-Protection",
"BSD-Source-beginning-file",
"BSD-Source-Code",
"BSD-Systemics",
"BSD-Systemics-W3Works",
"BSL-1.0",
"BUSL-1.1",
"bzip2-1.0.6",
"C-UDA-1.0",
"CAL-1.0",
"CAL-1.0-Combined-Work-Exception",
"Caldera",
"Caldera-no-preamble",
"CATOSL-1.1",
"CC-BY-1.0",
"CC-BY-2.0",
"CC-BY-2.5",
"CC-BY-2.5-AU",
"CC-BY-3.0",
"CC-BY-3.0-AT",
"CC-BY-3.0-AU",
"CC-BY-3.0-DE",
"CC-BY-3.0-IGO",
"CC-BY-3.0-NL",
"CC-BY-3.0-US",
"CC-BY-4.0",
"CC-BY-NC-1.0",
"CC-BY-NC-2.0",
"CC-BY-NC-2.5",
"CC-BY-NC-3.0",
"CC-BY-NC-3.0-DE",
"CC-BY-NC-4.0",
"CC-BY-NC-ND-1.0",
"CC-BY-NC-ND-2.0",
"CC-BY-NC-ND-2.5",
"CC-BY-NC-ND-3.0",
"CC-BY-NC-ND-3.0-DE",
"CC-BY-NC-ND-3.0-IGO",
"CC-BY-NC-ND-4.0",
"CC-BY-NC-SA-1.0",
"CC-BY-NC-SA-2.0",
"CC-BY-NC-SA-2.0-DE",
"CC-BY-NC-SA-2.0-FR",
"CC-BY-NC-SA-2.0-UK",
"CC-BY-NC-SA-2.5",
"CC-BY-NC-SA-3.0",
"CC-BY-NC-SA-3.0-DE",
"CC-BY-NC-SA-3.0-IGO",
"CC-BY-NC-SA-4.0",
"CC-BY-ND-1.0",
"CC-BY-ND-2.0",
"CC-BY-ND-2.5",
"CC-BY-ND-3.0",
"CC-BY-ND-3.0-DE",
"CC-BY-ND-4.0",
"CC-BY-SA-1.0",
"CC-BY-SA-2.0",
"CC-BY-SA-2.0-UK",
"CC-BY-SA-2.1-JP",
"CC-BY-SA-2.5",
"CC-BY-SA-3.0",
"CC-BY-SA-3.0-AT",
"CC-BY-SA-3.0-DE",
"CC-BY-SA-3.0-IGO",
"CC-BY-SA-4.0",
"CC-PDDC",
"CC0-1.0",
"CDDL-1.0",
"CDDL-1.1",
"CDL-1.0",
"CDLA-Permissive-1.0",
"CDLA-Permissive-2.0",
"CDLA-Sharing-1.0",
"CECILL-1.0",
"CECILL-1.1",
"CECILL-2.0",
"CECILL-2.1",
"CECILL-B",
"CECILL-C",
"CERN-OHL-1.1",
"CERN-OHL-1.2",
"CERN-OHL-P-2.0",
"CERN-OHL-S-2.0",
"CERN-OHL-W-2.0",
"CFITSIO",
"check-cvs",
"checkmk",
"ClArtistic",
"Clips",
"CMU-Mach",
"CMU-Mach-nodoc",
"CNRI-Jython",
"CNRI-Python",
"CNRI-Python-GPL-Compatible",
"COIL-1.0",
"Community-Spec-1.0",
"Condor-1.1",
"copyleft-next-0.3.0",
"copyleft-next-0.3.1",
"Cornell-Lossless-JPEG",
"CPAL-1.0",
"CPL-1.0",
"CPOL-1.02",
"Cronyx",
"Crossword",
"CrystalStacker",
"CUA-OPL-1.0",
"Cube",
"curl",
"D-FSL-1.0",
"DEC-3-Clause",
"diffmark",
"DL-DE-BY-2.0",
"DL-DE-ZERO-2.0",
"DOC",
"Dotseqn",
"DRL-1.0",
"DRL-1.1",
"DSDP",
"dtoa",
"dvipdfm",
"ECL-1.0",
"ECL-2.0",
"EFL-1.0",
"EFL-2.0",
"eGenix",
"Elastic-2.0",
"Entessa",
"EPICS",
"EPL-1.0",
"EPL-2.0",
"ErlPL-1.1",
"etalab-2.0",
"EUDatagrid",
"EUPL-1.0",
"EUPL-1.1",
"EUPL-1.2",
"Eurosym",
"Fair",
"FBM",
"FDK-AAC",
"Ferguson-Twofish",
"Frameworx-1.0",
"FreeBSD-DOC",
"FreeImage",
"FSFAP",
"FSFAP-no-warranty-disclaimer",
"FSFUL",
"FSFULLR",
"FSFULLRWD",
"FTL",
"Furuseth",
"fwlw",
"GCR-docs",
"GD",
"GFDL-1.1-invariants-only",
"GFDL-1.1-invariants-or-later",
"GFDL-1.1-no-invariants-only",
"GFDL-1.1-no-invariants-or-later",
"GFDL-1.1-only",
"GFDL-1.1-or-later",
"GFDL-1.2-invariants-only",
"GFDL-1.2-invariants-or-later",
"GFDL-1.2-no-invariants-only",
"GFDL-1.2-no-invariants-or-later",
"GFDL-1.2-only",
"GFDL-1.2-or-later",
"GFDL-1.3-invariants-only",
"GFDL-1.3-invariants-or-later",
"GFDL-1.3-no-invariants-only",
"GFDL-1.3-no-invariants-or-later",
"GFDL-1.3-only",
"GFDL-1.3-or-later",
"Giftware",
"GL2PS",
"Glide",
"Glulxe",
"GLWTPL",
"gnuplot",
"GPL-1.0-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0-only",
"GPL-3.0-or-later",
"Graphics-Gems",
"gSOAP-1.3b",
"gtkbook",
"HaskellReport",
"hdparm",
"Hippocratic-2.1",
"HP-1986",
"HP-1989",
"HPND",
"HPND-DEC",
"HPND-doc",
"HPND-doc-sell",
"HPND-export-US",
"HPND-export-US-modify",
"HPND-Fenneberg-Livingston",
"HPND-INRIA-IMAG",
"HPND-Kevlin-Henney",
"HPND-Markus-Kuhn",
"HPND-MIT-disclaimer",
"HPND-Pbmplus",
"HPND-sell-MIT-disclaimer-xserver",
"HPND-sell-regexpr",
"HPND-sell-variant",
"HPND-sell-variant-MIT-disclaimer",
"HPND-UC",
"HTMLTIDY",
"IBM-pibs",
"ICU",
"IEC-Code-Components-EULA",
"IJG",
"IJG-short",
"ImageMagick",
"iMatix",
"Imlib2",
"Info-ZIP",
"Inner-Net-2.0",
"Intel",
"Intel-ACPI",
"Interbase-1.0",
"IPA",
"IPL-1.0",
"ISC",
"ISC-Veillard",
"Jam",
"JasPer-2.0",
"JPL-image",
"JPNIC",
"JSON",
"Kastrup",
"Kazlib",
"Knuth-CTAN",
"LAL-1.2",
"LAL-1.3",
"Latex2e",
"Latex2e-translated-notice",
"Leptonica",
"LGPL-2.0-only",
"LGPL-2.0-or-later",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"LGPLLR",
"Libpng",
"libpng-2.0",
"libselinux-1.0",
"libtiff",
"libutil-David-Nugent",
"LiLiQ-P-1.1",
"LiLiQ-R-1.1",
"LiLiQ-Rplus-1.1",
"Linux-man-pages-1-para",
"Linux-man-pages-copyleft",
"Linux-man-pages-copyleft-2-para",
"Linux-man-pages-copyleft-var",
"Linux-OpenIB",
"LOOP",
"LPD-document",
"LPL-1.0",
"LPL-1.02",
"LPPL-1.0",
"LPPL-1.1",
"LPPL-1.2",
"LPPL-1.3a",
"LPPL-1.3c",
"lsof",
"Lucida-Bitmap-Fonts",
"LZMA-SDK-9.11-to-9.20",
"LZMA-SDK-9.22",
"Mackerras-3-Clause",
"Mackerras-3-Clause-acknowledgment",
"magaz",
"mailprio",
"MakeIndex",
"Martin-Birgmeier",
"McPhee-slideshow",
"metamail",
"Minpack",
"MirOS",
"MIT",
"MIT-0",
"MIT-advertising",
"MIT-CMU",
"MIT-enna",
"MIT-feh",
"MIT-Festival",
"MIT-Modern-Variant",
"MIT-open-group",
"MIT-testregex",
"MIT-Wu",
"MITNFA",
"MMIXware",
"Motosoto",
"MPEG-SSG",
"mpi-permissive",
"mpich2",
"MPL-1.0",
"MPL-1.1",
"MPL-2.0",
"MPL-2.0-no-copyleft-exception",
"mplus",
"MS-LPL",
"MS-PL",
"MS-RL",
"MTLL",
"MulanPSL-1.0",
"MulanPSL-2.0",
"Multics",
"Mup",
"NAIST-2003",
"NASA-1.3",
"Naumen",
"NBPL-1.0",
"NCGL-UK-2.0",
"NCSA",
"Net-SNMP",
"NetCDF",
"Newsletr",
"NGPL",
"NICTA-1.0",
"NIST-PD",
"NIST-PD-fallback",
"NIST-Software",
"NLOD-1.0",
"NLOD-2.0",
"NLPL",
"Nokia",
"NOSL",
"Noweb",
"NPL-1.0",
"NPL-1.1",
"NPOSL-3.0",
"NRL",
"NTP",
"NTP-0",
"O-UDA-1.0",
"OCCT-PL",
"OCLC-2.0",
"ODbL-1.0",
"ODC-By-1.0",
"OFFIS",
"OFL-1.0",
"OFL-1.0-no-RFN",
"OFL-1.0-RFN",
"OFL-1.1",
"OFL-1.1-no-RFN",
"OFL-1.1-RFN",
"OGC-1.0",
"OGDL-Taiwan-1.0",
"OGL-Canada-2.0",
"OGL-UK-1.0",
"OGL-UK-2.0",
"OGL-UK-3.0",
"OGTSL",
"OLDAP-1.1",
"OLDAP-1.2",
"OLDAP-1.3",
"OLDAP-1.4",
"OLDAP-2.0",
"OLDAP-2.0.1",
"OLDAP-2.1",
"OLDAP-2.2",
"OLDAP-2.2.1",
"OLDAP-2.2.2",
"OLDAP-2.3",
"OLDAP-2.4",
"OLDAP-2.5",
"OLDAP-2.6",
"OLDAP-2.7",
"OLDAP-2.8",
"OLFL-1.3",
"OML",
"OpenPBS-2.3",
"OpenSSL",
"OpenSSL-standalone",
"OpenVision",
"OPL-1.0",
"OPL-UK-3.0",
"OPUBL-1.0",
"OSET-PL-2.1",
"OSL-1.0",
"OSL-1.1",
"OSL-2.0",
"OSL-2.1",
"OSL-3.0",
"PADL",
"Parity-6.0.0",
"Parity-7.0.0",
"PDDL-1.0",
"PHP-3.0",
"PHP-3.01",
"Pixar",
"Plexus",
"pnmstitch",
"PolyForm-Noncommercial-1.0.0",
"PolyForm-Small-Business-1.0.0",
"PostgreSQL",
"PSF-2.0",
"psfrag",
"psutils",
"Python-2.0",
"Python-2.0.1",
"python-ldap",
"Qhull",
"QPL-1.0",
"QPL-1.0-INRIA-2004",
"radvd",
"Rdisc",
"RHeCos-1.1",
"RPL-1.1",
"RPL-1.5",
"RPSL-1.0",
"RSA-MD",
"RSCPL",
"Ruby",
"SAX-PD",
"SAX-PD-2.0",
"Saxpath",
"SCEA",
"SchemeReport",
"Sendmail",
"Sendmail-8.23",
"SGI-B-1.0",
"SGI-B-1.1",
"SGI-B-2.0",
"SGI-OpenGL",
"SGP4",
"SHL-0.5",
"SHL-0.51",
"SimPL-2.0",
"SISSL",
"SISSL-1.2",
"SL",
"Sleepycat",
"SMLNJ",
"SMPPL",
"SNIA",
"snprintf",
"softSurfer",
"Soundex",
"Spencer-86",
"Spencer-94",
"Spencer-99",
"SPL-1.0",
"ssh-keyscan",
"SSH-OpenSSH",
"SSH-short",
"SSLeay-standalone",
"SSPL-1.0",
"SugarCRM-1.1.3",
"Sun-PPP",
"SunPro",
"SWL",
"swrule",
"Symlinks",
"TAPR-OHL-1.0",
"TCL",
"TCP-wrappers",
"TermReadKey",
"TGPPL-1.0",
"TMate",
"TORQUE-1.1",
"TOSL",
"TPDL",
"TPL-1.0",
"TTWL",
"TTYP0",
"TU-Berlin-1.0",
"TU-Berlin-2.0",
"UCAR",
"UCL-1.0",
"ulem",
"UMich-Merit",
"Unicode-3.0",
"Unicode-DFS-2015",
"Unicode-DFS-2016",
"Unicode-TOU",
"UnixCrypt",
"Unlicense",
"UPL-1.0",
"URT-RLE",
"Vim",
"VOSTROM",
"VSL-1.0",
"W3C",
"W3C-19980720",
"W3C-20150513",
"w3m",
"Watcom-1.0",
"Widget-Workshop",
"Wsuipa",
"WTFPL",
"X11",
"X11-distribute-modifications-variant",
"Xdebug-1.03",
"Xerox",
"Xfig",
"XFree86-1.1",
"xinetd",
"xkeyboard-config-Zinoviev",
"xlock",
"Xnet",
"xpp",
"XSkat",
"YPL-1.0",
"YPL-1.1",
"Zed",
"Zeeff",
"Zend-2.0",
"Zimbra-1.3",
"Zimbra-1.4",
"Zlib",
"zlib-acknowledgement",
"ZPL-1.1",
"ZPL-2.0",
"ZPL-2.1"
],
"title": "LicenseId",
"type": "string"
},
{
"type": "string"
}
],
"description": "A [SPDX license identifier](https://spdx.org/licenses/).\nWe do notsupport custom license beyond the SPDX license list, if you need that please\n[open a GitHub issue](https://github.com/bioimage-io/spec-bioimage-io/issues/new/choose\n) to discuss your intentions with the community.",
"examples": [
"CC0-1.0",
"MIT",
"BSD-2-Clause"
],
"title": "License"
},
"outputs": {
"description": "Describes the output tensors.",
"items": {
"$ref": "#/$defs/OutputTensorDescr"
},
"minItems": 1,
"title": "Outputs",
"type": "array"
},
"packaged_by": {
"description": "The persons that have packaged and uploaded this model.\nOnly required if those persons differ from the `authors`.",
"items": {
"$ref": "#/$defs/Author"
},
"title": "Packaged By",
"type": "array"
},
"parent": {
"anyOf": [
{
"$ref": "#/$defs/LinkedModel"
},
{
"type": "null"
}
],
"default": null,
"description": "The model from which this model is derived, e.g. by fine-tuning the weights."
},
"run_mode": {
"anyOf": [
{
"$ref": "#/$defs/RunMode"
},
{
"type": "null"
}
],
"default": null,
"description": "Custom run mode for this model: for more complex prediction procedures like test time\ndata augmentation that currently cannot be expressed in the specification.\nNo standard run modes are defined yet."
},
"sample_inputs": {
"description": "URLs/relative paths to sample inputs to illustrate possible inputs for the model,\nfor example stored as PNG or TIFF images.\nThe sample files primarily serve to inform a human user about an example use case",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Sample Inputs",
"type": "array"
},
"sample_outputs": {
"description": "URLs/relative paths to sample outputs corresponding to the `sample_inputs`.",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"title": "Sample Outputs",
"type": "array"
},
"test_inputs": {
"description": "Test input tensors compatible with the `inputs` description for a **single test case**.\nThis means if your model has more than one input, you should provide one URL/relative path for each input.\nEach test input should be a file with an ndarray in\n[numpy.lib file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html#module-numpy.lib.format).\nThe extension must be '.npy'.",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"minItems": 1,
"title": "Test Inputs",
"type": "array"
},
"test_outputs": {
"description": "Analog to `test_inputs`.",
"items": {
"anyOf": [
{
"description": "A URL with the HTTP or HTTPS scheme.",
"format": "uri",
"maxLength": 2083,
"minLength": 1,
"title": "HttpUrl",
"type": "string"
},
{
"$ref": "#/$defs/RelativeFilePath"
},
{
"format": "file-path",
"title": "FilePath",
"type": "string"
}
]
},
"minItems": 1,
"title": "Test Outputs",
"type": "array"
},
"timestamp": {
"$ref": "#/$defs/Datetime"
},
"training_data": {
"anyOf": [
{
"$ref": "#/$defs/LinkedDataset"
},
{
"$ref": "#/$defs/DatasetDescr"
},
{
"type": "null"
}
],
"default": null,
"description": "The dataset used to train this model",
"title": "Training Data"
},
"weights": {
"$ref": "#/$defs/WeightsDescr",
"description": "The weights for this model.\nWeights can be given for different formats, but should otherwise be equivalent.\nThe available weight formats determine which consumers can use this model."
}
},
"required": [
"name",
"description",
"authors",
"format_version",
"type",
"documentation",
"inputs",
"license",
"outputs",
"test_inputs",
"test_outputs",
"timestamp",
"weights"
],
"title": "model 0.4.10",
"type": "object"
}
Fields:
- _validation_summary (Optional[ValidationSummary])
- _root (Union[RootHttpUrl, DirectoryPath, ZipFile])
- _file_name (Optional[FileName])
- description (str)
- covers (List[FileSource_cover])
- id_emoji (Optional[str])
- attachments (Optional[AttachmentsDescr])
- cite (List[CiteEntry])
- config (Dict[str, YamlValue])
- download_url (Optional[HttpUrl])
- git_repo (Optional[str])
- icon (Union[str, FileSource, None])
- links (List[str])
- uploader (Optional[Uploader])
- maintainers (List[Maintainer])
- rdf_source (Optional[FileSource])
- tags (List[str])
- version (Optional[Version])
- version_number (Optional[int])
- format_version (Literal['0.4.10'])
- type (Literal['model'])
- id (Optional[ModelId])
- authors (NotEmpty[List[Author]])
- documentation (FileSource_)
- inputs (NotEmpty[List[InputTensorDescr]])
- license (Union[LicenseId, str])
- name (str)
- outputs (NotEmpty[List[OutputTensorDescr]])
- packaged_by (List[Author])
- parent (Optional[LinkedModel])
- run_mode (Optional[RunMode])
- sample_inputs (List[FileSource_])
- sample_outputs (List[FileSource_])
- test_inputs (NotEmpty[List[FileSource_]])
- test_outputs (NotEmpty[List[FileSource_]])
- timestamp (Datetime)
- training_data (Union[LinkedDataset, DatasetDescr, None])
- weights (WeightsDescr)
Validators:
- unique_tensor_descr_names → inputs, outputs
- unique_io_names
- minimum_shape2valid_output
- validate_tensor_references_in_inputs
- validate_tensor_references_in_outputs
- ignore_url_parent → parent
- _convert_from_older_format
attachments
pydantic-field
¤
attachments: Optional[AttachmentsDescr] = None
file and other attachments
authors
pydantic-field
¤
The authors are the creators of the model RDF and the primary points of contact.
config
pydantic-field
¤
config: Dict[str, YamlValue]
A field for custom configuration that can contain any keys not present in the RDF spec.
This means you should not store, for example, a github repo URL in config since we already have the
git_repo field defined in the spec.
Keys in config may be very specific to a tool or consumer software. To avoid conflicting definitions,
it is recommended to wrap added configuration into a sub-field named with the specific domain or tool name,
for example:
config:
bioimageio: # here is the domain name
my_custom_key: 3837283
another_key:
nested: value
imagej: # config specific to ImageJ
macro_dir: path/to/macro/file
If possible, please use snake_case for keys in config.
You may want to list linked files additionally under attachments to include them when packaging a resource
(packaging a resource means downloading/copying important linked files and creating a ZIP archive that contains
an altered rdf.yaml file with local references to the downloaded files)
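As a minimal sketch (assuming an already loaded model description `descr`; the keys simply mirror the YAML example above and are illustrative only), such wrapped configuration can be read back like this:

```python
# Sketch: reading a tool-specific entry back from `config`.
# `descr` is assumed to be a loaded/validated model description.
bioimageio_cfg = descr.config.get("bioimageio", {})
print(bioimageio_cfg.get("my_custom_key"))                  # 3837283
print(bioimageio_cfg.get("another_key", {}).get("nested"))  # "value"
```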
covers
pydantic-field
¤
covers: List[FileSource_cover]
Cover images. Please use an image smaller than 500KB and an aspect ratio width to height of 2:1.
documentation
pydantic-field
¤
documentation: FileSource_
URL or relative path to a markdown file with additional documentation.
The recommended documentation file name is README.md. An .md suffix is mandatory.
The documentation should include a '[#[#]]# Validation' (sub)section
with details on how to quantitatively validate the model on unseen data.
download_url
pydantic-field
¤
download_url: Optional[HttpUrl] = None
URL to download the resource from (deprecated)
file_name
property
¤
file_name: Optional[FileName]
File name of the bioimageio.yaml file the description was loaded from.
git_repo
pydantic-field
¤
git_repo: Optional[str] = None
A URL to the Git repository where the resource is being developed.
id
pydantic-field
¤
id: Optional[ModelId] = None
bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.
implemented_format_version
class-attribute
¤
implemented_format_version: Literal['0.4.10'] = '0.4.10'
implemented_format_version_tuple
class-attribute
¤
implemented_format_version_tuple: Tuple[int, int, int]
inputs
pydantic-field
¤
inputs: NotEmpty[List[InputTensorDescr]]
Describes the input tensors expected by this model.
license
pydantic-field
¤
license: Union[LicenseId, str]
A SPDX license identifier. We do not support custom licenses beyond the SPDX license list; if you need that, please open a GitHub issue to discuss your intentions with the community.
maintainers
pydantic-field
¤
maintainers: List[Maintainer]
Maintainers of this resource.
If not specified, authors are maintainers and at least some of them should specify their github_user name.
name
pydantic-field
¤
name: str
A human-readable name of this model. It should be no longer than 64 characters and only contain letter, number, underscore, minus or space characters.
packaged_by
pydantic-field
¤
packaged_by: List[Author]
The persons that have packaged and uploaded this model.
Only required if those persons differ from the authors.
parent
pydantic-field
¤
parent: Optional[LinkedModel] = None
The model from which this model is derived, e.g. by fine-tuning the weights.
rdf_source
pydantic-field
¤
rdf_source: Optional[FileSource] = None
Resource description file (RDF) source; used to keep track of where an rdf.yaml was loaded from. Do not set this field in a YAML file.
root
property
¤
root: Union[RootHttpUrl, DirectoryPath, ZipFile]
The URL/Path prefix to resolve any relative paths with.
run_mode
pydantic-field
¤
run_mode: Optional[RunMode] = None
Custom run mode for this model: for more complex prediction procedures like test time data augmentation that currently cannot be expressed in the specification. No standard run modes are defined yet.
sample_inputs
pydantic-field
¤
sample_inputs: List[FileSource_]
URLs/relative paths to sample inputs to illustrate possible inputs for the model, for example stored as PNG or TIFF images. The sample files primarily serve to inform a human user about an example use case
sample_outputs
pydantic-field
¤
sample_outputs: List[FileSource_]
URLs/relative paths to sample outputs corresponding to the sample_inputs.
test_inputs
pydantic-field
¤
test_inputs: NotEmpty[List[FileSource_]]
Test input tensors compatible with the inputs description for a single test case.
This means if your model has more than one input, you should provide one URL/relative path for each input.
Each test input should be a file with an ndarray in
numpy.lib file format.
The extension must be '.npy'.
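A minimal sketch (array shape and values are illustrative only) of writing such a test input with numpy:

```python
import numpy as np

# Hypothetical single test input, e.g. one sample with one channel and
# 128x128 spatial extent for axes "bcyx"; values are random placeholders.
test_input = np.random.rand(1, 1, 128, 128).astype("float32")
np.save("test_input_0.npy", test_input)  # the '.npy' extension is mandatory
```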
training_data
pydantic-field
¤
training_data: Union[LinkedDataset, DatasetDescr, None] = (
None
)
The dataset used to train this model
uploader
pydantic-field
¤
uploader: Optional[Uploader] = None
The person who uploaded the model (e.g. to bioimage.io)
version
pydantic-field
¤
version: Optional[Version] = None
The version of the resource following SemVer 2.0.
version_number
pydantic-field
¤
version_number: Optional[int] = None
version number (n-th published version, not the semantic version)
weights
pydantic-field
¤
weights: WeightsDescr
The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any)
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 199–211
accept_author_strings
classmethod
¤
accept_author_strings(
authors: Union[Any, Sequence[Any]],
) -> Any
we unofficially accept strings as author entries
Source code in src/bioimageio/spec/generic/v0_2.py, lines 245–255
get_input_test_arrays
¤
get_input_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_4.py, lines 1352–1355
get_output_test_arrays
¤
get_output_test_arrays() -> List[NDArray[Any]]
Source code in src/bioimageio/spec/model/v0_4.py, lines 1357–1360
get_package_content
¤
get_package_content() -> Dict[
FileName, Union[FileDescr, BioimageioYamlContent]
]
Returns package content without creating the package.
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 377–392
ignore_url_parent
pydantic-validator
¤
ignore_url_parent(parent: Any)
Source code in src/bioimageio/spec/model/v0_4.py, lines 1292–1299
load
classmethod
¤
load(
data: BioimageioYamlContentView,
context: Optional[ValidationContext] = None,
) -> Union[Self, InvalidDescr]
factory method to create a resource description object
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 213–253
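A minimal usage sketch, assuming the class documented here is imported as `ModelDescr` (e.g. from `bioimageio.spec.model.v0_4`) and that the rdf.yaml has already been parsed with a YAML library; the file name is hypothetical:

```python
import yaml  # assumption: PyYAML is available for parsing rdf.yaml

from bioimageio.spec.model.v0_4 import ModelDescr  # assumed import path

with open("rdf.yaml") as f:  # hypothetical file name
    data = yaml.safe_load(f)

descr = ModelDescr.load(data)  # returns a ModelDescr or an InvalidDescr
```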
minimum_shape2valid_output
pydantic-validator
¤
minimum_shape2valid_output() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1168–1208
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py, lines 43–81
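A minimal usage sketch; unlike `load`, which returns an `InvalidDescr` on failure, `model_validate` raises a `ValidationError`. The import path and dictionary content below are assumptions for illustration:

```python
from bioimageio.spec.model.v0_4 import ModelDescr  # assumed import path
from pydantic import ValidationError

rdf_dict: dict = {}  # hypothetical placeholder for parsed rdf.yaml content

try:
    descr = ModelDescr.model_validate(rdf_dict)
except ValidationError as e:
    print("invalid model description:", e)
```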
package
¤
package(
dest: Optional[
Union[ZipFile, IO[bytes], Path, str]
] = None,
) -> ZipFile
package the described resource as a zip archive
| PARAMETER | DESCRIPTION |
|---|---|
| `dest` | (path/bytes stream of) destination zipfile. TYPE: `Optional[Union[ZipFile, IO[bytes], Path, str]]` |

Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 347–375
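A minimal usage sketch (the destination file name is hypothetical and `descr` is assumed to be a validated model description, e.g. from the `load` sketch above):

```python
from pathlib import Path

dest = Path("my_model.zip")    # hypothetical destination
archive = descr.package(dest)  # writes rdf.yaml plus linked files, returns the ZipFile
```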
unique_io_names
pydantic-validator
¤
unique_io_names() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1160–1166
unique_tensor_descr_names
pydantic-validator
¤
unique_tensor_descr_names(
value: Sequence[
Union[InputTensorDescr, OutputTensorDescr]
],
) -> Sequence[Union[InputTensorDescr, OutputTensorDescr]]
Source code in src/bioimageio/spec/model/v0_4.py, lines 1149–1158
validate_tensor_references_in_inputs
pydantic-validator
¤
validate_tensor_references_in_inputs() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1249–1267
validate_tensor_references_in_outputs
pydantic-validator
¤
validate_tensor_references_in_outputs() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1269–1281
warn_about_tag_categories
classmethod
¤
warn_about_tag_categories(
value: List[str], info: ValidationInfo
) -> List[str]
Source code in src/bioimageio/spec/generic/v0_2.py, lines 359–376
_OutputTensorConv
¤
_OutputTensorConv(src: Type[SRC], tgt: Type[TGT])
Bases: Converter[_OutputTensorDescr_v0_4, OutputTensorDescr, FileSource_, Optional[FileSource_], Mapping[_TensorName_v0_4, Mapping[str, int]]]
flowchart TD
bioimageio.spec.model.v0_5._OutputTensorConv[_OutputTensorConv]
bioimageio.spec._internal.node_converter.Converter[Converter]
bioimageio.spec._internal.node_converter.Converter --> bioimageio.spec.model.v0_5._OutputTensorConv
| METHOD | DESCRIPTION |
|---|---|
| `convert` | convert source node |
| `convert_as_dict` | |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `src` | TYPE: `Type[SRC]` |
| `tgt` | TYPE: `Type[TGT]` |

Source code in src/bioimageio/spec/_internal/node_converter.py, lines 79–82
convert
¤
convert(source: SRC, /, *args: Unpack[CArgs]) -> TGT
convert source node
| PARAMETER | DESCRIPTION |
|---|---|
| `source` | A bioimageio description node. TYPE: `SRC` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | conversion failed |

Source code in src/bioimageio/spec/_internal/node_converter.py, lines 92–102
convert_as_dict
¤
convert_as_dict(
source: SRC, /, *args: Unpack[CArgs]
) -> Dict[str, Any]
Source code in src/bioimageio/spec/_internal/node_converter.py, lines 104–105
_OutputTensorDescr_v0_4
pydantic-model
¤
Bases: TensorDescrBase
Show JSON schema:
{
"$defs": {
"BinarizeDescr": {
"additionalProperties": false,
"description": "BinarizeDescr the tensor with a fixed `BinarizeKwargs.threshold`.\nValues above the threshold will be set to one, values below the threshold to zero.",
"properties": {
"name": {
"const": "binarize",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/BinarizeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.BinarizeDescr",
"type": "object"
},
"BinarizeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `BinarizeDescr`",
"properties": {
"threshold": {
"description": "The fixed threshold",
"title": "Threshold",
"type": "number"
}
},
"required": [
"threshold"
],
"title": "model.v0_4.BinarizeKwargs",
"type": "object"
},
"ClipDescr": {
"additionalProperties": false,
"description": "Clip tensor values to a range.\n\nSet tensor values below `ClipKwargs.min` to `ClipKwargs.min`\nand above `ClipKwargs.max` to `ClipKwargs.max`.",
"properties": {
"name": {
"const": "clip",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ClipKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ClipDescr",
"type": "object"
},
"ClipKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ClipDescr`",
"properties": {
"min": {
"description": "minimum value for clipping",
"title": "Min",
"type": "number"
},
"max": {
"description": "maximum value for clipping",
"title": "Max",
"type": "number"
}
},
"required": [
"min",
"max"
],
"title": "model.v0_4.ClipKwargs",
"type": "object"
},
"ImplicitOutputShape": {
"additionalProperties": false,
"description": "Output tensor shape depending on an input tensor shape.\n`shape(output_tensor) = shape(input_tensor) * scale + 2 * offset`",
"properties": {
"reference_tensor": {
"description": "Name of the reference tensor.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"scale": {
"description": "output_pix/input_pix for each dimension.\n'null' values indicate new dimensions, whose length is defined by 2*`offset`",
"items": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
]
},
"minItems": 1,
"title": "Scale",
"type": "array"
},
"offset": {
"description": "Position of origin wrt to input.",
"items": {
"anyOf": [
{
"type": "integer"
},
{
"multipleOf": 0.5,
"type": "number"
}
]
},
"minItems": 1,
"title": "Offset",
"type": "array"
}
},
"required": [
"reference_tensor",
"scale",
"offset"
],
"title": "model.v0_4.ImplicitOutputShape",
"type": "object"
},
"ScaleLinearDescr": {
"additionalProperties": false,
"description": "Fixed linear scaling.",
"properties": {
"name": {
"const": "scale_linear",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleLinearKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleLinearDescr",
"type": "object"
},
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleLinearDescr`",
"properties": {
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to scale the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"title": "model.v0_4.ScaleLinearKwargs",
"type": "object"
},
"ScaleMeanVarianceDescr": {
"additionalProperties": false,
"description": "Scale the tensor s.t. its mean and variance match a reference tensor.",
"properties": {
"name": {
"const": "scale_mean_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleMeanVarianceDescr",
"type": "object"
},
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceDescr`",
"properties": {
"mode": {
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"reference_tensor": {
"description": "Name of tensor to match.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to normalize the two image axes for 2d data jointly.\nDefault: scale all non-batch axes jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n\"`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"mode",
"reference_tensor"
],
"title": "model.v0_4.ScaleMeanVarianceKwargs",
"type": "object"
},
"ScaleRangeDescr": {
"additionalProperties": false,
"description": "Scale with percentiles.",
"properties": {
"name": {
"const": "scale_range",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleRangeDescr",
"type": "object"
},
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"mode": {
"description": "Mode for computing percentiles.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | compute for the entire dataset |\n| per_sample | compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example xy to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"min_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"ge": 0,
"lt": 100,
"title": "Min Percentile"
},
"max_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"gt": 1,
"le": 100,
"title": "Max Percentile"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"minLength": 1,
"title": "TensorName",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor name to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.\nFor a tensor in `outputs` only input tensor refereences are allowed if `mode: per_dataset`",
"title": "Reference Tensor"
}
},
"required": [
"mode",
"axes"
],
"title": "model.v0_4.ScaleRangeKwargs",
"type": "object"
},
"SigmoidDescr": {
"additionalProperties": false,
"description": "The logistic sigmoid funciton, a.k.a. expit function.",
"properties": {
"name": {
"const": "sigmoid",
"title": "Name",
"type": "string"
}
},
"required": [
"name"
],
"title": "model.v0_4.SigmoidDescr",
"type": "object"
},
"ZeroMeanUnitVarianceDescr": {
"additionalProperties": false,
"description": "Subtract mean and divide by variance.",
"properties": {
"name": {
"const": "zero_mean_unit_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ZeroMeanUnitVarianceDescr",
"type": "object"
},
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"mode": {
"default": "fixed",
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| fixed | Fixed values for mean and variance |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"fixed",
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example `xy` to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"mean": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The mean value(s) to use for `mode: fixed`.\nFor example `[1.1, 2.2, 3.3]` in the case of a 3 channel image with `axes: xy`.",
"examples": [
[
1.1,
2.2,
3.3
]
],
"title": "Mean"
},
"std": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The standard deviation values to use for `mode: fixed`. Analogous to mean.",
"examples": [
[
0.1,
0.2,
0.3
]
],
"title": "Std"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"axes"
],
"title": "model.v0_4.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"name": {
"description": "Tensor name. No duplicates are allowed.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"description": {
"default": "",
"title": "Description",
"type": "string"
},
"axes": {
"description": "Axes identifying characters. Same length and order as the axes in `shape`.\n| axis | description |\n| --- | --- |\n| b | batch (groups multiple samples) |\n| i | instance/index/element |\n| t | time |\n| c | channel |\n| z | spatial dimension z |\n| y | spatial dimension y |\n| x | spatial dimension x |",
"title": "Axes",
"type": "string"
},
"data_range": {
"anyOf": [
{
"maxItems": 2,
"minItems": 2,
"prefixItems": [
{
"type": "number"
},
{
"type": "number"
}
],
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Tuple `(minimum, maximum)` specifying the allowed range of the data in this tensor.\nIf not specified, the full data range that can be expressed in `data_type` is allowed.",
"title": "Data Range"
},
"data_type": {
"description": "Data type.\nThe data flow in bioimage.io models is explained\n[in this diagram.](https://docs.google.com/drawings/d/1FTw8-Rn6a6nXdkZ_SkMumtcjvur9mtIhRqLwnKqZNHM/edit).",
"enum": [
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool"
],
"title": "Data Type",
"type": "string"
},
"shape": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"$ref": "#/$defs/ImplicitOutputShape"
}
],
"description": "Output tensor shape.",
"title": "Shape"
},
"halo": {
"anyOf": [
{
"items": {
"type": "integer"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The `halo` that should be cropped from the output tensor to avoid boundary effects.\nThe `halo` is to be cropped from both sides, i.e. `shape_after_crop = shape - 2 * halo`.\nTo document a `halo` that is already cropped by the model `shape.offset` has to be used instead.",
"title": "Halo"
},
"postprocessing": {
"description": "Description of how this output should be postprocessed.",
"items": {
"discriminator": {
"mapping": {
"binarize": "#/$defs/BinarizeDescr",
"clip": "#/$defs/ClipDescr",
"scale_linear": "#/$defs/ScaleLinearDescr",
"scale_mean_variance": "#/$defs/ScaleMeanVarianceDescr",
"scale_range": "#/$defs/ScaleRangeDescr",
"sigmoid": "#/$defs/SigmoidDescr",
"zero_mean_unit_variance": "#/$defs/ZeroMeanUnitVarianceDescr"
},
"propertyName": "name"
},
"oneOf": [
{
"$ref": "#/$defs/BinarizeDescr"
},
{
"$ref": "#/$defs/ClipDescr"
},
{
"$ref": "#/$defs/ScaleLinearDescr"
},
{
"$ref": "#/$defs/SigmoidDescr"
},
{
"$ref": "#/$defs/ZeroMeanUnitVarianceDescr"
},
{
"$ref": "#/$defs/ScaleRangeDescr"
},
{
"$ref": "#/$defs/ScaleMeanVarianceDescr"
}
]
},
"title": "Postprocessing",
"type": "array"
}
},
"required": [
"name",
"axes",
"data_type",
"shape"
],
"title": "model.v0_4.OutputTensorDescr",
"type": "object"
}
Fields:
- name (TensorName)
- description (str)
- axes (AxesStr)
- data_range (Optional[Tuple[float, float]])
- data_type (Literal['float32', 'float64', 'uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64', 'bool'])
- shape (Union[Sequence[int], ImplicitOutputShape])
- halo (Optional[Sequence[int]])
- postprocessing (List[PostprocessingDescr])
Validators:
- matching_halo_length
- validate_postprocessing_kwargs
axes
pydantic-field
¤
axes: AxesStr
Axes identifying characters. Same length and order as the axes in shape.
| axis | description |
| --- | --- |
| b | batch (groups multiple samples) |
| i | instance/index/element |
| t | time |
| c | channel |
| z | spatial dimension z |
| y | spatial dimension y |
| x | spatial dimension x |
data_range
pydantic-field
¤
data_range: Optional[Tuple[float, float]] = None
Tuple (minimum, maximum) specifying the allowed range of the data in this tensor.
If not specified, the full data range that can be expressed in data_type is allowed.
data_type
pydantic-field
¤
data_type: Literal[
"float32",
"float64",
"uint8",
"int8",
"uint16",
"int16",
"uint32",
"int32",
"uint64",
"int64",
"bool",
]
Data type. The data flow in bioimage.io models is explained in this diagram.
halo
pydantic-field
¤
halo: Optional[Sequence[int]] = None
The halo that should be cropped from the output tensor to avoid boundary effects.
The halo is to be cropped from both sides, i.e. shape_after_crop = shape - 2 * halo.
To document a halo that is already cropped by the model shape.offset has to be used instead.
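A small numeric sketch of this cropping arithmetic (shapes and halo values are illustrative only):

```python
# shape_after_crop = shape - 2 * halo, applied per axis; values illustrative.
shape = (1, 1, 256, 256)
halo = (0, 0, 16, 16)
shape_after_crop = tuple(s - 2 * h for s, h in zip(shape, halo))
print(shape_after_crop)  # (1, 1, 224, 224)
```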
postprocessing
pydantic-field
¤
postprocessing: List[PostprocessingDescr]
Description of how this output should be postprocessed.
matching_halo_length
pydantic-validator
¤
matching_halo_length() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1003–1010
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py, lines 43–81
validate_postprocessing_kwargs
pydantic-validator
¤
validate_postprocessing_kwargs() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 1012–1022
_ParameterizedInputShape_v0_4
pydantic-model
¤
Bases: Node
A sequence of valid shapes given by shape_k = min + k * step for k in {0, 1, ...}.
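A short sketch enumerating the first few valid shapes for illustrative `min` and `step` values:

```python
# shape_k = min + k * step for k in {0, 1, ...}; values are illustrative.
min_shape = [1, 1, 64, 64]
step = [0, 0, 16, 16]

for k in range(3):
    shape_k = [m + k * s for m, s in zip(min_shape, step)]
    print(k, shape_k)
# 0 [1, 1, 64, 64]
# 1 [1, 1, 80, 80]
# 2 [1, 1, 96, 96]
```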
Show JSON schema:
{
"additionalProperties": false,
"description": "A sequence of valid shapes given by `shape_k = min + k * step for k in {0, 1, ...}`.",
"properties": {
"min": {
"description": "The minimum input shape",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Min",
"type": "array"
},
"step": {
"description": "The minimum shape change",
"items": {
"type": "integer"
},
"minItems": 1,
"title": "Step",
"type": "array"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_4.ParameterizedInputShape",
"type": "object"
}
Fields:
- min
- step
Validators:
- matching_lengths
__len__
¤
__len__() -> int
Source code in src/bioimageio/spec/model/v0_4.py, lines 556–557
matching_lengths
pydantic-validator
¤
matching_lengths() -> Self
Source code in src/bioimageio/spec/model/v0_4.py, lines 559–564
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py, lines 43–81
_ScaleLinearDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Fixed linear scaling.
Show JSON schema:
{
"$defs": {
"ScaleLinearKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleLinearDescr`",
"properties": {
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to scale the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"gain": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 1.0,
"description": "multiplicative factor",
"title": "Gain"
},
"offset": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"type": "array"
}
],
"default": 0.0,
"description": "additive term",
"title": "Offset"
}
},
"title": "model.v0_4.ScaleLinearKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Fixed linear scaling.",
"properties": {
"name": {
"const": "scale_linear",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleLinearKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleLinearDescr",
"type": "object"
}
Fields:
- name (Literal['scale_linear'])
- kwargs (ScaleLinearKwargs)
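A minimal numpy sketch of the operation described by `ScaleLinearKwargs` (elementwise `gain` and `offset`; per-axis handling via `axes` is omitted and all values are illustrative):

```python
import numpy as np

# out = gain * tensor + offset, applied elementwise; values are illustrative.
tensor = np.array([0.0, 0.5, 1.0], dtype="float32")
gain, offset = 2.0, 0.1
out = gain * tensor + offset
print(out)  # [0.1 1.1 2.1]
```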
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py, lines 59–75
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py, lines 43–81
_ScaleMeanVarianceDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Scale the tensor s.t. its mean and variance match a reference tensor.
Show JSON schema:
{
"$defs": {
"ScaleMeanVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleMeanVarianceDescr`",
"properties": {
"mode": {
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"reference_tensor": {
"description": "Name of tensor to match.",
"minLength": 1,
"title": "TensorName",
"type": "string"
},
"axes": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The subset of axes to scale jointly.\nFor example xy to normalize the two image axes for 2d data jointly.\nDefault: scale all non-batch axes jointly.",
"examples": [
"xy"
],
"title": "Axes"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability:\n\"`out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"mode",
"reference_tensor"
],
"title": "model.v0_4.ScaleMeanVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale the tensor s.t. its mean and variance match a reference tensor.",
"properties": {
"name": {
"const": "scale_mean_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleMeanVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleMeanVarianceDescr",
"type": "object"
}
Fields:
- name (Literal['scale_mean_variance'])
- kwargs (ScaleMeanVarianceKwargs)
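A minimal numpy sketch of the formula given in the `eps` description of `ScaleMeanVarianceKwargs` (per-sample statistics over all axes; data is illustrative):

```python
import numpy as np

eps = 1e-6
tensor = np.random.rand(1, 1, 64, 64).astype("float32")          # illustrative
reference = np.random.rand(1, 1, 64, 64).astype("float32") * 10  # illustrative

mean, std = tensor.mean(), tensor.std()
ref_mean, ref_std = reference.mean(), reference.std()

# out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean
out = (tensor - mean) / (std + eps) * (ref_std + eps) + ref_mean
```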
implemented_name
class-attribute
¤
implemented_name: Literal["scale_mean_variance"] = (
"scale_mean_variance"
)
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | |
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py, lines 43–81
_ScaleRangeDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Scale with percentiles.
Show JSON schema:
{
"$defs": {
"ScaleRangeKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ScaleRangeDescr`\n\nFor `min_percentile`=0.0 (the default) and `max_percentile`=100 (the default)\nthis processing step normalizes data to the [0, 1] intervall.\nFor other percentiles the normalized values will partially be outside the [0, 1]\nintervall. Use `ScaleRange` followed by `ClipDescr` if you want to limit the\nnormalized values to a range.",
"properties": {
"mode": {
"description": "Mode for computing percentiles.\n| mode | description |\n| ----------- | ------------------------------------ |\n| per_dataset | compute for the entire dataset |\n| per_sample | compute for each sample individually |",
"enum": [
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example xy to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"min_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 0.0,
"description": "The lower percentile used to determine the value to align with zero.",
"ge": 0,
"lt": 100,
"title": "Min Percentile"
},
"max_percentile": {
"anyOf": [
{
"type": "integer"
},
{
"type": "number"
}
],
"default": 100.0,
"description": "The upper percentile used to determine the value to align with one.\nHas to be bigger than `min_percentile`.\nThe range is 1 to 100 instead of 0 to 100 to avoid mistakenly\naccepting percentiles specified in the range 0.0 to 1.0.",
"gt": 1,
"le": 100,
"title": "Max Percentile"
},
"eps": {
"default": 1e-06,
"description": "Epsilon for numeric stability.\n`out = (tensor - v_lower) / (v_upper - v_lower + eps)`;\nwith `v_lower,v_upper` values at the respective percentiles.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
},
"reference_tensor": {
"anyOf": [
{
"minLength": 1,
"title": "TensorName",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Tensor name to compute the percentiles from. Default: The tensor itself.\nFor any tensor in `inputs` only input tensor references are allowed.\nFor a tensor in `outputs` only input tensor refereences are allowed if `mode: per_dataset`",
"title": "Reference Tensor"
}
},
"required": [
"mode",
"axes"
],
"title": "model.v0_4.ScaleRangeKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Scale with percentiles.",
"properties": {
"name": {
"const": "scale_range",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ScaleRangeKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ScaleRangeDescr",
"type": "object"
}
Fields:
- name (Literal['scale_range'])
- kwargs (ScaleRangeKwargs)
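The kwargs above describe the normalization `out = (tensor - v_lower) / (v_upper - v_lower + eps)`, with `v_lower` and `v_upper` taken at `min_percentile` and `max_percentile`. A rough per-sample numpy sketch of that formula (not the reference implementation; handling of the `axes` subset is omitted):

```python
import numpy as np

def scale_range(tensor: np.ndarray, min_percentile: float = 0.0,
                max_percentile: float = 100.0, eps: float = 1e-6) -> np.ndarray:
    """Percentile scaling as described by ScaleRangeKwargs (per_sample, all axes jointly)."""
    v_lower = np.percentile(tensor, min_percentile)
    v_upper = np.percentile(tensor, max_percentile)
    return (tensor - v_lower) / (v_upper - v_lower + eps)

sample = np.random.rand(1, 256, 256).astype("float32")
normalized = scale_range(sample, min_percentile=1.0, max_percentile=99.8)
```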
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
_SigmoidDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
The logistic sigmoid function, a.k.a. expit function.
Show JSON schema:
{
"additionalProperties": false,
"description": "The logistic sigmoid funciton, a.k.a. expit function.",
"properties": {
"name": {
"const": "sigmoid",
"title": "Name",
"type": "string"
}
},
"required": [
"name"
],
"title": "model.v0_4.SigmoidDescr",
"type": "object"
}
Fields:
- name (Literal['sigmoid'])
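The step takes no kwargs; it applies the element-wise logistic sigmoid. As a short numpy sketch:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    # element-wise logistic sigmoid (expit): 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))
```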
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
_TensorName_v0_4
¤
Bases: LowerCaseIdentifier
flowchart TD
bioimageio.spec.model.v0_5._TensorName_v0_4[_TensorName_v0_4]
bioimageio.spec._internal.types.LowerCaseIdentifier[LowerCaseIdentifier]
bioimageio.spec._internal.validated_string.ValidatedString[ValidatedString]
bioimageio.spec._internal.types.LowerCaseIdentifier --> bioimageio.spec.model.v0_5._TensorName_v0_4
bioimageio.spec._internal.validated_string.ValidatedString --> bioimageio.spec._internal.types.LowerCaseIdentifier
| METHOD | DESCRIPTION |
|---|---|
| `__get_pydantic_core_schema__` |  |
| `__get_pydantic_json_schema__` |  |
| `__new__` |  |

| ATTRIBUTE | DESCRIPTION |
|---|---|
| `root_model` | TYPE: `Type[RootModel[Any]]` |
root_model
class-attribute
¤
root_model: Type[RootModel[Any]] = RootModel[
LowerCaseIdentifierAnno
]
__get_pydantic_core_schema__
classmethod
¤
__get_pydantic_core_schema__(
source_type: Any, handler: GetCoreSchemaHandler
) -> CoreSchema
Source code in src/bioimageio/spec/_internal/validated_string.py
__get_pydantic_json_schema__
classmethod
¤
__get_pydantic_json_schema__(
core_schema: CoreSchema, handler: GetJsonSchemaHandler
) -> JsonSchemaValue
Source code in src/bioimageio/spec/_internal/validated_string.py
__new__
¤
__new__(object: object)
Source code in src/bioimageio/spec/_internal/validated_string.py
_TensorSizes
¤
Bases: NamedTuple
flowchart TD
bioimageio.spec.model.v0_5._TensorSizes[_TensorSizes]
_AxisSizes as nested dicts
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `inputs` |  |
| `outputs` |  |
_WithInputAxisSize
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"ParameterizedSize": {
"additionalProperties": false,
"description": "Describes a range of valid tensor axis sizes as `size = min + n*step`.\n\n- **min** and **step** are given by the model description.\n- All blocksize paramters n = 0,1,2,... yield a valid `size`.\n- A greater blocksize paramter n = 0,1,2,... results in a greater **size**.\n This allows to adjust the axis size more generically.",
"properties": {
"min": {
"exclusiveMinimum": 0,
"title": "Min",
"type": "integer"
},
"step": {
"exclusiveMinimum": 0,
"title": "Step",
"type": "integer"
}
},
"required": [
"min",
"step"
],
"title": "model.v0_5.ParameterizedSize",
"type": "object"
},
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/ParameterizedSize"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- parameterized series of valid sizes (`ParameterizedSize`)\n- reference to another axis with an optional offset (`SizeReference`)",
"examples": [
10,
{
"min": 32,
"step": 16
},
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"size"
],
"title": "model.v0_5._WithInputAxisSize",
"type": "object"
}
Fields:
- size (Union[int, ParameterizedSize, SizeReference])
size
pydantic-field
¤
size: Union[int, ParameterizedSize, SizeReference]
The size/length of this axis can be specified as
- a fixed integer
- a parameterized series of valid sizes (`ParameterizedSize`)
- a reference to another axis with an optional offset (`SizeReference`); see the sketch below
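A construction sketch for the three variants, using the v0_5 classes documented on this page (tensor and axis ids are illustrative placeholders):

```python
from bioimageio.spec.model.v0_5 import (
    AxisId,
    ParameterizedSize,
    SizeReference,
    SpaceInputAxis,
    TensorId,
)

# fixed integer size
x_fixed = SpaceInputAxis(id=AxisId("x"), size=256)

# parameterized: every size = 64 + n*16 with n = 0, 1, 2, ... is valid
x_param = SpaceInputAxis(id=AxisId("x"), size=ParameterizedSize(min=64, step=16))

# referenced: size derived from axis "x" of tensor "input", shifted by an offset
y_ref = SpaceInputAxis(
    id=AxisId("y"),
    size=SizeReference(tensor_id=TensorId("input"), axis_id=AxisId("x"), offset=-1),
)
```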
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
_WithOutputAxisSize
pydantic-model
¤
Bases: Node
Show JSON schema:
{
"$defs": {
"SizeReference": {
"additionalProperties": false,
"description": "A tensor axis size (extent in pixels/frames) defined in relation to a reference axis.\n\n`axis.size = reference.size * reference.scale / axis.scale + offset`\n\nNote:\n1. The axis and the referenced axis need to have the same unit (or no unit).\n2. Batch axes may not be referenced.\n3. Fractions are rounded down.\n4. If the reference axis is `concatenable` the referencing axis is assumed to be\n `concatenable` as well with the same block order.\n\nExample:\nAn unisotropic input image of w*h=100*49 pixels depicts a phsical space of 200*196mm\u00b2.\nLet's assume that we want to express the image height h in relation to its width w\ninstead of only accepting input images of exactly 100*49 pixels\n(for example to express a range of valid image shapes by parametrizing w, see `ParameterizedSize`).\n\n>>> w = SpaceInputAxis(id=AxisId(\"w\"), size=100, unit=\"millimeter\", scale=2)\n>>> h = SpaceInputAxis(\n... id=AxisId(\"h\"),\n... size=SizeReference(tensor_id=TensorId(\"input\"), axis_id=AxisId(\"w\"), offset=-1),\n... unit=\"millimeter\",\n... scale=4,\n... )\n>>> print(h.size.get_size(h, w))\n49\n\n\u21d2 h = w * w.scale / h.scale + offset = 100 * 2mm / 4mm - 1 = 49",
"properties": {
"tensor_id": {
"description": "tensor id of the reference axis",
"maxLength": 32,
"minLength": 1,
"title": "TensorId",
"type": "string"
},
"axis_id": {
"description": "axis id of the reference axis",
"maxLength": 16,
"minLength": 1,
"title": "AxisId",
"type": "string"
},
"offset": {
"default": 0,
"title": "Offset",
"type": "integer"
}
},
"required": [
"tensor_id",
"axis_id"
],
"title": "model.v0_5.SizeReference",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"size": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"$ref": "#/$defs/SizeReference"
}
],
"description": "The size/length of this axis can be specified as\n- fixed integer\n- reference to another axis with an optional offset (see `SizeReference`)",
"examples": [
10,
{
"axis_id": "a",
"offset": 5,
"tensor_id": "t"
}
],
"title": "Size"
}
},
"required": [
"size"
],
"title": "model.v0_5._WithOutputAxisSize",
"type": "object"
}
Fields:
- size (Union[int, SizeReference])
size
pydantic-field
¤
size: Union[int, SizeReference]
The size/length of this axis can be specified as
- a fixed integer
- a reference to another axis with an optional offset (see `SizeReference`)
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
_ZeroMeanUnitVarianceDescr_v0_4
pydantic-model
¤
Bases: ProcessingDescrBase
Subtract mean and divide by variance.
Show JSON schema:
{
"$defs": {
"ZeroMeanUnitVarianceKwargs": {
"additionalProperties": false,
"description": "key word arguments for `ZeroMeanUnitVarianceDescr`",
"properties": {
"mode": {
"default": "fixed",
"description": "Mode for computing mean and variance.\n| mode | description |\n| ----------- | ------------------------------------ |\n| fixed | Fixed values for mean and variance |\n| per_dataset | Compute for the entire dataset |\n| per_sample | Compute for each sample individually |",
"enum": [
"fixed",
"per_dataset",
"per_sample"
],
"title": "Mode",
"type": "string"
},
"axes": {
"description": "The subset of axes to normalize jointly.\nFor example `xy` to normalize the two image axes for 2d data jointly.",
"examples": [
"xy"
],
"title": "Axes",
"type": "string"
},
"mean": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The mean value(s) to use for `mode: fixed`.\nFor example `[1.1, 2.2, 3.3]` in the case of a 3 channel image with `axes: xy`.",
"examples": [
[
1.1,
2.2,
3.3
]
],
"title": "Mean"
},
"std": {
"anyOf": [
{
"type": "number"
},
{
"items": {
"type": "number"
},
"minItems": 1,
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "The standard deviation values to use for `mode: fixed`. Analogous to mean.",
"examples": [
[
0.1,
0.2,
0.3
]
],
"title": "Std"
},
"eps": {
"default": 1e-06,
"description": "epsilon for numeric stability: `out = (tensor - mean) / (std + eps)`.",
"exclusiveMinimum": 0,
"maximum": 0.1,
"title": "Eps",
"type": "number"
}
},
"required": [
"axes"
],
"title": "model.v0_4.ZeroMeanUnitVarianceKwargs",
"type": "object"
}
},
"additionalProperties": false,
"description": "Subtract mean and divide by variance.",
"properties": {
"name": {
"const": "zero_mean_unit_variance",
"title": "Name",
"type": "string"
},
"kwargs": {
"$ref": "#/$defs/ZeroMeanUnitVarianceKwargs"
}
},
"required": [
"name",
"kwargs"
],
"title": "model.v0_4.ZeroMeanUnitVarianceDescr",
"type": "object"
}
Fields:
- name (Literal['zero_mean_unit_variance'])
- kwargs (ZeroMeanUnitVarianceKwargs)
implemented_name
class-attribute
¤
implemented_name: Literal["zero_mean_unit_variance"] = (
"zero_mean_unit_variance"
)
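For `mode: per_sample` this step computes `out = (tensor - mean) / (std + eps)` over the axes listed in `axes`. A rough numpy sketch (not the reference implementation):

```python
import numpy as np

def zero_mean_unit_variance(tensor: np.ndarray, axes=None, eps: float = 1e-6) -> np.ndarray:
    """Per-sample normalization as described by ZeroMeanUnitVarianceKwargs."""
    mean = tensor.mean(axis=axes, keepdims=True)
    std = tensor.std(axis=axes, keepdims=True)
    return (tensor - mean) / (std + eps)

# e.g. normalize the two spatial axes of a (c, y, x) sample jointly (axes: xy)
sample = np.random.rand(3, 128, 128).astype("float32")
normalized = zero_mean_unit_variance(sample, axes=(1, 2))
```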
__pydantic_init_subclass__
classmethod
¤
__pydantic_init_subclass__(**kwargs: Any) -> None
Source code in src/bioimageio/spec/_internal/common_nodes.py
model_validate
classmethod
¤
model_validate(
obj: Union[Any, Mapping[str, Any]],
*,
strict: Optional[bool] = None,
from_attributes: Optional[bool] = None,
context: Union[
ValidationContext, Mapping[str, Any], None
] = None,
by_alias: bool | None = None,
by_name: bool | None = None,
) -> Self
Validate a pydantic model instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `obj` | The object to validate. TYPE: `Union[Any, Mapping[str, Any]]` |
| `strict` | Whether to raise an exception on invalid fields. TYPE: `Optional[bool]` |
| `from_attributes` | Whether to extract data from object attributes. TYPE: `Optional[bool]` |
| `context` | Additional context to pass to the validator. TYPE: `Union[ValidationContext, Mapping[str, Any], None]` |

| RAISES | DESCRIPTION |
|---|---|
| `ValidationError` | If the object failed validation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Self` | The validated description instance. |

Source code in src/bioimageio/spec/_internal/node.py
_axes_letters_to_ids
¤
_axes_letters_to_ids(
axes: Optional[str],
) -> Optional[List[AxisId]]
Source code in src/bioimageio/spec/model/v0_5.py
_convert_proc
¤
_convert_proc(
p: Union[
_PreprocessingDescr_v0_4, _PostprocessingDescr_v0_4
],
tensor_axes: Sequence[str],
) -> Union[PreprocessingDescr, PostprocessingDescr]
Source code in src/bioimageio/spec/model/v0_5.py
_get_complement_v04_axis
¤
_get_complement_v04_axis(
tensor_axes: Sequence[str],
axes: Optional[Sequence[str]],
) -> Optional[AxisId]
Source code in src/bioimageio/spec/model/v0_5.py
_get_halo_axis_discriminator_value
¤
_get_halo_axis_discriminator_value(
v: Any,
) -> Literal["with_halo", "wo_halo"]
Source code in src/bioimageio/spec/model/v0_5.py
_is_batch
¤
_is_batch(a: str) -> bool
Source code in src/bioimageio/spec/model/v0_5.py
_is_not_batch
¤
_is_not_batch(a: str) -> bool
Source code in src/bioimageio/spec/model/v0_5.py
_normalize_axis_id
¤
_normalize_axis_id(a: str)
Source code in src/bioimageio/spec/model/v0_5.py
convert_axes
¤
convert_axes(
axes: str,
*,
shape: Union[
Sequence[int],
_ParameterizedInputShape_v0_4,
_ImplicitOutputShape_v0_4,
],
tensor_type: Literal["input", "output"],
halo: Optional[Sequence[int]],
size_refs: Mapping[_TensorName_v0_4, Mapping[str, int]],
)
Source code in src/bioimageio/spec/model/v0_5.py
extract_file_name
¤
extract_file_name(
src: Union[
pydantic.HttpUrl,
RootHttpUrl,
PurePath,
RelativeFilePath,
ZipPath,
FileDescr,
],
) -> FileName
Source code in src/bioimageio/spec/_internal/io.py
generate_covers
¤
generate_covers(
inputs: Sequence[Tuple[InputTensorDescr, NDArray[Any]]],
outputs: Sequence[
Tuple[OutputTensorDescr, NDArray[Any]]
],
) -> List[Path]
Source code in src/bioimageio/spec/model/v0_5.py
get_reader
¤
get_reader(
source: Union[PermissiveFileSource, FileDescr, ZipPath],
/,
progressbar: Union[
Progressbar, Callable[[], Progressbar], bool, None
] = None,
**kwargs: Unpack[HashKwargs],
) -> BytesReader
Open a file source (download if needed)
Source code in src/bioimageio/spec/_internal/io.py
get_validation_context
¤
get_validation_context(
default: Optional[ValidationContext] = None,
) -> ValidationContext
Get the currently active validation context (or a default)
Source code in src/bioimageio/spec/_internal/validation_context.py
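A usage sketch, assuming `ValidationContext` is re-exported at the top level of `bioimageio.spec` and can be entered as a context manager with a `perform_io_checks` setting (both assumptions for illustration):

```python
from bioimageio.spec import ValidationContext
from bioimageio.spec._internal.validation_context import get_validation_context

# without an explicitly entered context, a default context is returned
default_ctx = get_validation_context()

# inside a `with` block, the entered context becomes the active one
with ValidationContext(perform_io_checks=False):  # assumed field, see lead-in
    active = get_validation_context()
    assert not active.perform_io_checks
```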
is_dict
¤
is_dict(v: Any) -> TypeGuard[Dict[Any, Any]]
to avoid Dict[Unknown, Unknown]
Source code in src/bioimageio/spec/_internal/type_guards.py
is_sequence
¤
is_sequence(v: Any) -> TypeGuard[Sequence[Any]]
to avoid Sequence[Unknown]
Source code in src/bioimageio/spec/_internal/type_guards.py
issue_warning
¤
issue_warning(
msg: LiteralString,
*,
value: Any,
severity: WarningSeverity = WARNING,
msg_context: Optional[Dict[str, Any]] = None,
field: Optional[str] = None,
log_depth: int = 1,
)
Source code in src/bioimageio/spec/_internal/field_warning.py
load_array
¤
load_array(
source: Union[FileSource, FileDescr, ZipPath],
) -> NDArray[Any]
Source code in src/bioimageio/spec/_internal/io_utils.py
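A sketch assuming a local `.npy` file path is an accepted `FileSource` (the file name is an illustrative placeholder):

```python
from pathlib import Path

from bioimageio.spec._internal.io_utils import load_array

# read a test tensor stored as .npy into a numpy array
arr = load_array(Path("test_input.npy"))
print(arr.shape, arr.dtype)
```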
package_file_descr_serializer
¤
package_file_descr_serializer(
value: FileDescr,
handler: SerializerFunctionWrapHandler,
info: SerializationInfo,
)
Source code in src/bioimageio/spec/_internal/io_packaging.py
package_weights
¤
package_weights(
value: Node,
handler: SerializerFunctionWrapHandler,
info: SerializationInfo,
)
Source code in src/bioimageio/spec/model/v0_4.py
validate_tensors
¤
validate_tensors(
tensors: Mapping[
TensorId, Tuple[TensorDescr, Optional[NDArray[Any]]]
],
tensor_origin: Literal["test_tensor"],
)
Source code in src/bioimageio/spec/model/v0_5.py
warn
¤
warn(
typ: Union[AnnotationMetaData, Any],
msg: LiteralString,
severity: WarningSeverity = WARNING,
)
treat a type or its annotation metadata as a warning condition
Source code in src/bioimageio/spec/_internal/field_warning.py
wo_special_file_name
¤
wo_special_file_name(src: F) -> F
Source code in src/bioimageio/spec/_internal/io.py