prediction

Functions:

Name Description
predict

Run prediction for a single set of input(s) with a bioimage.io model

predict_many

Run prediction for multiple sets of inputs with a bioimage.io model

predict

predict(*, model: Union[PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline], inputs: Union[Sample, PerMember[TensorSource], TensorSource], sample_id: Hashable = 'sample', blocksize_parameter: Optional[BlocksizeParameter] = None, input_block_shape: Optional[Mapping[MemberId, Mapping[AxisId, int]]] = None, skip_preprocessing: bool = False, skip_postprocessing: bool = False, save_output_path: Optional[Union[Path, str]] = None) -> Sample

Run prediction for a single set of input(s) with a bioimage.io model

Parameters:

Name Type Description Default

model

Union[PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline]

Model to predict with. May be given as RDF source, model description or prediction pipeline.

required

inputs

Union[Sample, PerMember[TensorSource], TensorSource]

The input sample, or the named input(s) for this model as a dictionary.

required

sample_id

Hashable

The sample id. The sample_id is used to format save_output_path and to distinguish sample-specific log messages.

'sample'

blocksize_parameter

Optional[BlocksizeParameter]

(optional) Tile the input into blocks parametrized by blocksize_parameter according to any parametrized axis sizes defined by the model. See bioimageio.spec.model.v0_5.ParameterizedSize for details. Note: For a predetermined, fixed block shape use input_block_shape.

None

input_block_shape

Optional[Mapping[MemberId, Mapping[AxisId, int]]]

(optional) Tile the input sample tensors into blocks. Note: Use blocksize_parameter for a parameterized block shape to run prediction independent of the exact block shape.

None

skip_preprocessing

bool

Flag to skip the model's preprocessing.

False

skip_postprocessing

bool

Flag to skip the model's postprocessing.

False

save_output_path

Optional[Union[Path, str]]

A path to save the output to. Must contain {output_id} (or {member_id}) if the model has multiple output tensors. May contain {sample_id} to avoid overwriting output from repeated calls.

None
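For illustration, a fixed block shape for a hypothetical model with a single input tensor named "raw" could be spelled as a plain nested dict (the tensor and axis names here are made up, not from any real model; the typed MemberId/AxisId keys are assumed to be string-based):

```python
# Hypothetical fixed-blocking spec: tile the input tensor "raw" into
# 512x512 blocks along its spatial axes "x" and "y". Outer keys are
# tensor (member) IDs, inner keys are axis IDs.
input_block_shape = {"raw": {"x": 512, "y": 512}}
```

This matches the expected `Mapping[MemberId, Mapping[AxisId, int]]` shape, assuming plain strings coerce to the library's ID types.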
Source code in src/bioimageio/core/prediction.py
def predict(
    *,
    model: Union[
        PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline
    ],
    inputs: Union[Sample, PerMember[TensorSource], TensorSource],
    sample_id: Hashable = "sample",
    blocksize_parameter: Optional[BlocksizeParameter] = None,
    input_block_shape: Optional[Mapping[MemberId, Mapping[AxisId, int]]] = None,
    skip_preprocessing: bool = False,
    skip_postprocessing: bool = False,
    save_output_path: Optional[Union[Path, str]] = None,
) -> Sample:
    """Run prediction for a single set of input(s) with a bioimage.io model

    Args:
        model: Model to predict with.
            May be given as RDF source, model description or prediction pipeline.
        inputs: The input sample, or the named input(s) for this model as a dictionary.
        sample_id: The sample id.
            The **sample_id** is used to format **save_output_path**
            and to distinguish sample-specific log messages.
        blocksize_parameter: (optional) Tile the input into blocks parametrized by
            **blocksize_parameter** according to any parametrized axis sizes defined
            by the **model**.
            See `bioimageio.spec.model.v0_5.ParameterizedSize` for details.
            Note: For a predetermined, fixed block shape use **input_block_shape**.
        input_block_shape: (optional) Tile the input sample tensors into blocks.
            Note: Use **blocksize_parameter** for a parameterized block shape to
                run prediction independent of the exact block shape.
        skip_preprocessing: Flag to skip the model's preprocessing.
        skip_postprocessing: Flag to skip the model's postprocessing.
        save_output_path: A path to save the output to.
            Must contain:
            - `{output_id}` (or `{member_id}`) if the model has multiple output tensors
            May contain:
            - `{sample_id}` to avoid overwriting recurrent calls
    """
    if isinstance(model, PredictionPipeline):
        pp = model
        model = pp.model_description
    else:
        if not isinstance(model, (v0_4.ModelDescr, v0_5.ModelDescr)):
            loaded = load_description(model)
            if not isinstance(loaded, (v0_4.ModelDescr, v0_5.ModelDescr)):
                raise ValueError(f"expected model description, but got {loaded}")
            model = loaded

        pp = create_prediction_pipeline(model)

    if save_output_path is not None:
        if (
            "{output_id}" not in str(save_output_path)
            and "{member_id}" not in str(save_output_path)
            and len(model.outputs) > 1
        ):
            raise ValueError(
                f"Missing `{{output_id}}` in save_output_path={save_output_path} to "
                + "distinguish model outputs "
                + str([get_member_id(d) for d in model.outputs])
            )

    if isinstance(inputs, Sample):
        sample = inputs
    else:
        sample = create_sample_for_model(
            pp.model_description, inputs=inputs, sample_id=sample_id
        )

    if input_block_shape is not None:
        if blocksize_parameter is not None:
            logger.warning(
                "ignoring blocksize_parameter={} in favor of input_block_shape={}",
                blocksize_parameter,
                input_block_shape,
            )

        output = pp.predict_sample_with_fixed_blocking(
            sample,
            input_block_shape=input_block_shape,
            skip_preprocessing=skip_preprocessing,
            skip_postprocessing=skip_postprocessing,
        )
    elif blocksize_parameter is not None:
        output = pp.predict_sample_with_blocking(
            sample,
            skip_preprocessing=skip_preprocessing,
            skip_postprocessing=skip_postprocessing,
            ns=blocksize_parameter,
        )
    else:
        output = pp.predict_sample_without_blocking(
            sample,
            skip_preprocessing=skip_preprocessing,
            skip_postprocessing=skip_postprocessing,
        )
    if save_output_path:
        save_sample(save_output_path, output)

    return output
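A minimal usage sketch of the function above. It assumes `predict` is exported at the `bioimageio.core` top level and that an image file path is an acceptable TensorSource; the model source, file names, and output template are illustrative:

```python
from pathlib import Path

def output_template(out_dir: str) -> str:
    # predict() formats this template with sample_id and output_id, so a
    # multi-output model writes one file per output without clobbering.
    return str(Path(out_dir) / "{sample_id}_{output_id}.tiff")

def predict_and_save(model_source: str, image_path: str, out_dir: str):
    """Sketch: run a bioimage.io model on one image, saving every output."""
    from bioimageio.core import predict  # assumed top-level export

    return predict(
        model=model_source,               # e.g. an RDF source / model ID
        inputs=image_path,                # a single tensor source
        sample_id=Path(image_path).stem,  # formatted into the template
        save_output_path=output_template(out_dir),
    )
```

Including `{output_id}` is only required for multi-output models, but keeping it in the template is harmless for single-output ones.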

predict_many

predict_many(*, model: Union[PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline], inputs: Union[Iterable[PerMember[TensorSource]], Iterable[TensorSource]], sample_id: str = 'sample{i:03}', blocksize_parameter: Optional[Union[v0_5.ParameterizedSize_N, Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N]]] = None, skip_preprocessing: bool = False, skip_postprocessing: bool = False, save_output_path: Optional[Union[Path, str]] = None) -> Iterator[Sample]

Run prediction for multiple sets of inputs with a bioimage.io model

Parameters:

Name Type Description Default

model

Union[PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline]

Model to predict with. May be given as RDF source, model description or prediction pipeline.

required

inputs

Union[Iterable[PerMember[TensorSource]], Iterable[TensorSource]]

An iterable of the named input(s) for this model as a dictionary.

required

sample_id

str

The sample id. Note: {i} is formatted with the index of the i-th sample. If {i} (or {i:) is not present, {i:03} is appended.

'sample{i:03}'

blocksize_parameter

Optional[Union[v0_5.ParameterizedSize_N, Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N]]]

(optional) Tile the input into blocks parametrized by blocksize_parameter according to any parametrized axis sizes defined in the model RDF.

None

skip_preprocessing

bool

Flag to skip the model's preprocessing.

False

skip_postprocessing

bool

Flag to skip the model's postprocessing.

False

save_output_path

Optional[Union[Path, str]]

A path to save the output to. Must contain {sample_id} to differentiate predicted samples, and {output_id} (or {member_id}) if the model has multiple outputs.

None
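A sketch of batch prediction over a list of image files. It assumes `predict_many` is exported at the `bioimageio.core` top level; the model source, file list, and output template are illustrative:

```python
from pathlib import Path

def predict_folder(model_source: str, image_paths, out_dir: str):
    """Sketch: run one bioimage.io model over many input images."""
    from bioimageio.core import predict_many  # assumed top-level export

    # The template satisfies both requirements listed above: {sample_id}
    # separates samples, {output_id} covers multi-output models.
    template = str(Path(out_dir) / "{sample_id}_{output_id}.tiff")
    # predict_many yields lazily; materialize the list to run all predictions.
    return list(
        predict_many(
            model=model_source,
            inputs=image_paths,
            save_output_path=template,
        )
    )
```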
Source code in src/bioimageio/core/prediction.py
def predict_many(
    *,
    model: Union[
        PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline
    ],
    inputs: Union[Iterable[PerMember[TensorSource]], Iterable[TensorSource]],
    sample_id: str = "sample{i:03}",
    blocksize_parameter: Optional[
        Union[
            v0_5.ParameterizedSize_N,
            Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N],
        ]
    ] = None,
    skip_preprocessing: bool = False,
    skip_postprocessing: bool = False,
    save_output_path: Optional[Union[Path, str]] = None,
) -> Iterator[Sample]:
    """Run prediction for multiple sets of inputs with a bioimage.io model

    Args:
        model: Model to predict with.
            May be given as RDF source, model description or prediction pipeline.
        inputs: An iterable of the named input(s) for this model as a dictionary.
        sample_id: The sample id.
            Note: `{i}` is formatted with the index of the i-th sample.
            If `{i}` (or `{i:`) is not present, `{i:03}` is appended.
        blocksize_parameter: (optional) Tile the input into blocks parametrized by
            blocksize according to any parametrized axis sizes defined in the model RDF.
        skip_preprocessing: Flag to skip the model's preprocessing.
        skip_postprocessing: Flag to skip the model's postprocessing.
        save_output_path: A path to save the output to.
            Must contain:
            - `{sample_id}` to differentiate predicted samples
            - `{output_id}` (or `{member_id}`) if the model has multiple outputs
    """
    if save_output_path is not None and "{sample_id}" not in str(save_output_path):
        raise ValueError(
            f"Missing `{{sample_id}}` in save_output_path={save_output_path}"
            + " to differentiate predicted samples."
        )

    if isinstance(model, PredictionPipeline):
        pp = model
    else:
        if not isinstance(model, (v0_4.ModelDescr, v0_5.ModelDescr)):
            loaded = load_description(model)
            if not isinstance(loaded, (v0_4.ModelDescr, v0_5.ModelDescr)):
                raise ValueError(f"expected model description, but got {loaded}")
            model = loaded

        pp = create_prediction_pipeline(model)

    if not isinstance(inputs, collections.abc.Mapping):
        if "{i}" not in sample_id and "{i:" not in sample_id:
            sample_id += "{i:03}"

        total = len(inputs) if isinstance(inputs, collections.abc.Sized) else None

        for i, ipts in tqdm(enumerate(inputs), total=total):
            yield predict(
                model=pp,
                inputs=ipts,
                sample_id=sample_id.format(i=i),
                blocksize_parameter=blocksize_parameter,
                skip_preprocessing=skip_preprocessing,
                skip_postprocessing=skip_postprocessing,
                save_output_path=save_output_path,
            )
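The sample_id handling in the loop above can be isolated as a small helper (a re-implementation for illustration, not part of the library):

```python
def format_sample_ids(sample_id: str, n: int):
    # Mirrors predict_many: if the template contains neither "{i}" nor a
    # "{i:...}" format spec, "{i:03}" is appended so each of the n samples
    # gets a unique, zero-padded id.
    if "{i}" not in sample_id and "{i:" not in sample_id:
        sample_id += "{i:03}"
    return [sample_id.format(i=i) for i in range(n)]
```

For example, the default template `"sample{i:03}"` yields `sample000`, `sample001`, ..., while a template without a placeholder, such as `"cell"`, becomes `cell000`, `cell001`, ...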