Get started
Finding a compatible Python environment
For model inference you need a Python environment with the bioimageio.core package and the model's framework-specific dependencies installed.
You may choose to install bioimageio.core together with one or more suitable frameworks as optional dependencies via pip, e.g.:
pip install bioimageio.core[pytorch,onnx]
If you are not sure which framework you want to use the model with, or the model comes with custom dependencies,
you may have the bioimageio Command Line Interface (CLI) create a suitable environment for a specific model,
using Miniforge (or your favorite conda distribution).
For more details on conda environments, check out the conda docs.
First, create (or reuse) a conda environment with bioimageio.core>0.9.6 installed:
conda create -n bioimageio -c conda-forge "bioimageio.core>0.9.6"
conda activate bioimageio
Choose a model source, e.g. a bioimage.io model id like "affable-shark" or a path/url to a bioimageio.yaml (often named rdf.yaml). Then use the bioimageio CLI (or bioimageio.core.test_description) to test the model. Use --runtime-env=as-described to test each available weight format in its recommended conda environment, which is installed on the fly if necessary:
bioimageio test affable-shark --runtime-env=as-described
The resulting report shows details of the tests performed in the respective conda environments. Inspect the report and choose a conda environment that passed all tests. Each conda environment is named after the SHA-256 hash of its generated conda environment.yaml, e.g. "95227f474ca45b024cf315edb4101e4919199d0a79ef5ff1eb474dc8ce1ec4d8".
You may want to rename or clone your chosen conda environment:
conda activate base
conda rename -n 95227f474ca45b024cf315edb4101e4919199d0a79ef5ff1eb474dc8ce1ec4d8 bioimageio-affable-shark
conda activate bioimageio-affable-shark
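As noted above, the environment name is the SHA-256 hash of the generated environment.yaml. A minimal sketch of how such a name could be derived (the YAML content below is a made-up example, and the exact encoding/normalization the CLI uses is an assumption):

```python
import hashlib

# Made-up environment.yaml content for illustration; the real file is
# generated by the bioimageio CLI for a specific model and weight format.
env_yaml = """channels:
- conda-forge
dependencies:
- bioimageio.core
"""

# The environment name is the SHA-256 hex digest of the file content
env_name = hashlib.sha256(env_yaml.encode("utf-8")).hexdigest()
print(env_name)  # 64 hexadecimal characters
```

This is why two models with identical recommended environments end up sharing one conda environment: the same file content hashes to the same name.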
Test model+environment
Test a bioimageio compatible model, e.g. "affable-shark" in an active Python environment:
bioimageio test affable-shark
0:00:01.723982 | INFO | cli - starting CLI with arguments:
['/opt/hostedtoolcache/Python/3.12.12/x64/bin/bioimageio',
'test',
'affable-shark']
0:00:01.724637 | INFO | cli - loaded CLI input:
{'test': {'source': 'affable-shark'}}
0:00:01.724876 | INFO | cli - executing CLI command:
{'test': {'determinism': 'seed_only',
'devices': None,
'format_version': 'discover',
'runtime_env': 'currently-active',
'source': 'affable-shark',
'stop_early': False,
'summary': ['display'],
'weight_format': 'all',
'working_dir': None}}
0:00:03.346200 | INFO | io_utils - loaded affable-shark from https://hypha.aicell.io/bioimage-io/artifacts/affable-shark/files/rdf.yaml
2026-02-18 11:18:26.039245460 [W:onnxruntime:Default, device_discovery.cc:131 GetPciBusId] Skipping pci_bus_id for PCI path at "/sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/5620e0c7-8062-4dce-aeb7-520c7ef76171" because filename ""5620e0c7-8062-4dce-aeb7-520c7ef76171"" dit not match expected pattern of [0-9a-f]+:[0-9a-f]+:[0-9a-f]+[.][0-9a-f]+
/home/runner/work/core-bioimage-io-python/core-bioimage-io-python/src/bioimageio/core/backends/onnx_backend.py:105: UserWarning: Device management is not implemented for onnx yet, cannot unload model
warnings.warn(
0:01:12.961406 | INFO | _resource_tests - Testing inference with 'onnx' for 6 different inputs (B, N): {(1, 2), (2, 1), (1, 1), (2, 0), (2, 2), (1, 0)}
2026-02-18 11:18:27.306447917 [W:onnxruntime:, execution_frame.cc:874 VerifyOutputSizes] Expected shape from model of {1,2,-1,-1} does not match actual shape of {2,2,64,64} for output output0
2026-02-18 11:18:27.418683023 [W:onnxruntime:, execution_frame.cc:874 VerifyOutputSizes] Expected shape from model of {1,2,-1,-1} does not match actual shape of {2,2,80,80} for output output0
2026-02-18 11:18:27.578168946 [W:onnxruntime:, execution_frame.cc:874 VerifyOutputSizes] Expected shape from model of {1,2,-1,-1} does not match actual shape of {2,2,96,96} for output output0
/home/runner/work/core-bioimage-io-python/core-bioimage-io-python/src/bioimageio/core/backends/onnx_backend.py:105: UserWarning: Device management is not implemented for onnx yet, cannot unload model
warnings.warn(
0:01:14.708539 | INFO | _resource_tests - Testing inference with 'pytorch_state_dict' for 6 different inputs (B, N): {(1, 2), (2, 1), (1, 1), (2, 0), (2, 2), (1, 0)}
0:01:16.817541 | INFO | _resource_tests - Testing inference with 'torchscript' for 6 different inputs (B, N): {(1, 2), (2, 1), (1, 1), (2, 0), (2, 2), (1, 0)}
✔️ bioimageio format validation
───────────────────────────────────────────────────────────────────────────────────────────
status passed
source https://hypha.aicell.io/bioimage-io/artifacts/affable-shark/files/rdf.yaml
id affable-shark
version 1.1
applied format model 0.5.7
bioimageio.core 0.9.6
bioimageio.spec 0.5.7.4
Location Details
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
✔️ Successfully created ModelDescr instance.
✔️ bioimageio.spec format validation model 0.5.7
✔️ weights.onnx Reproduce test outputs from test inputs (onnx)
weights.onnx recommended conda environment (Reproduce test outputs from test inputs (onnx))
See Recommended Conda Environment 1 below.
weights.onnx conda compare (Reproduce test outputs from test inputs (onnx))
See Conda Environment Comparison 1 below.
✔️ weights.onnx Run onnx inference for inputs with batch_size: 1 and size parameter n: 0
✔️ weights.onnx Run onnx inference for inputs with batch_size: 1 and size parameter n: 1
✔️ weights.onnx Run onnx inference for inputs with batch_size: 1 and size parameter n: 2
✔️ weights.onnx Run onnx inference for inputs with batch_size: 2 and size parameter n: 0
✔️ weights.onnx Run onnx inference for inputs with batch_size: 2 and size parameter n: 1
✔️ weights.onnx Run onnx inference for inputs with batch_size: 2 and size parameter n: 2
✔️ weights.pytorch_state_dict Reproduce test outputs from test inputs (pytorch_state_dict)
weights.pytorch_state_dict recommended conda environment (Reproduce test outputs from test inputs (pytorch_state_dict))
See Recommended Conda Environment 2 below.
weights.pytorch_state_dict conda compare (Reproduce test outputs from test inputs (pytorch_state_dict))
See Conda Environment Comparison 2 below.
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 0
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 1
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 2
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 0
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 1
✔️ weights.pytorch_state_dict Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 2
✔️ weights.torchscript Reproduce test outputs from test inputs (torchscript)
weights.torchscript recommended conda environment (Reproduce test outputs from test inputs (torchscript))
See Recommended Conda Environment 3 below.
weights.torchscript conda compare (Reproduce test outputs from test inputs (torchscript))
See Conda Environment Comparison 3 below.
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 1 and size parameter n: 0
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 1 and size parameter n: 1
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 1 and size parameter n: 2
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 2 and size parameter n: 0
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 2 and size parameter n: 1
✔️ weights.torchscript Run torchscript inference for inputs with batch_size: 2 and size parameter n: 2
Recommended Conda Environment 1
%YAML 1.2
---
channels:
- conda-forge
- nodefaults
dependencies:
- conda-forge::bioimageio.core>=0.9.4
- onnxruntime
- pip
Conda Environment Comparison 1
bioimageio.core not found
onnxruntime not found
Recommended Conda Environment 2
%YAML 1.2
---
channels:
- conda-forge
- nodefaults
dependencies:
- conda-forge::bioimageio.core>=0.9.4
- mkl ==2024.0.0
- numpy <2
- pip
- pytorch==1.10.0
- setuptools <70.0.0
- torchaudio==0.10.0
- torchvision==0.11.0
Conda Environment Comparison 2
bioimageio.core not found
mkl not found
numpy not found
pytorch not found
setuptools found but mismatch. Specification pkg: setuptools[version='<70.0.0'], Running pkg: setuptools=80.9.0=py313h06a4308_0
torchaudio not found
torchvision not found
Recommended Conda Environment 3
%YAML 1.2
---
channels:
- conda-forge
- nodefaults
dependencies:
- conda-forge::bioimageio.core>=0.9.4
- pip
- pytorch==2.8.0
- torchaudio==2.8.0
- torchvision==0.23.0
Conda Environment Comparison 3
bioimageio.core not found
pytorch not found
torchaudio not found
torchvision not found
To test your own model, replace the already published model identifier 'affable-shark' with a local folder or a path to a bioimageio.yaml file. Check out the bioimageio.spec documentation for more information on the bioimage.io metadata description format.
The Python equivalent would be:
from bioimageio.core import test_description
summary = test_description("affable-shark")
CLI: bioimageio predict
You can use the bioimageio Command Line Interface (CLI) provided by the bioimageio.core package to run prediction with a bioimageio compatible model in a suitable Python environment.
bioimageio predict --help
usage: bioimageio predict [-h] [--inputs List[{str,List[str]}]] [--outputs {str,Tuple[str,...]}] [--overwrite | --no-overwrite] [--blockwise | --no-blockwise] [--stats Path]
[--preview | --no-preview] [--weight-format {keras_hdf5,onnx,pytorch_state_dict,tensorflow_saved_model_bundle,torchscript,any}] [--example | --no-example]
SOURCE
Run inference on your data with a bioimage.io model.
positional arguments:
SOURCE Url/path to a (folder with a) bioimageio.yaml/rdf.yaml file
or a bioimage.io resource identifier, e.g. 'affable-shark'
options:
-h, --help show this help message and exit
--inputs List[{str,List[str]}]
Model input sample paths (for each input tensor)
The input paths are expected to have shape...
- (n_samples,) or (n_samples,1) for models expecting a single input tensor
- (n_samples,) containing the substring '{input_id}', or
- (n_samples, n_model_inputs) to provide each input tensor path explicitly.
All substrings that are replaced by metadata from the model description:
- '{model_id}'
- '{input_id}'
Example inputs to process sample 'a' and 'b'
for a model expecting a 'raw' and a 'mask' input tensor:
--inputs="[[\"a_raw.tif\",\"a_mask.tif\"],[\"b_raw.tif\",\"b_mask.tif\"]]"
(Note that JSON double quotes need to be escaped.)
Alternatively a `bioimageio-cli.yaml` (or `bioimageio-cli.json`) file
may provide the arguments, e.g.:
```yaml
inputs:
- [a_raw.tif, a_mask.tif]
- [b_raw.tif, b_mask.tif]
```
`.npy` and any file extension supported by imageio are supported.
Available formats are listed at
https://imageio.readthedocs.io/en/stable/formats/index.html#all-formats.
Some formats have additional dependencies.
(default factory: PredictCmd.<lambda>)
--outputs {str,Tuple[str,...]}
Model output path pattern (per output tensor)
All substrings that are replaced:
- '{model_id}' (from model description)
- '{output_id}' (from model description)
- '{sample_id}' (extracted from input paths)
(default: outputs_{model_id}/{output_id}/{sample_id}.tif)
--overwrite, --no-overwrite
allow overwriting existing output files (default: False)
--blockwise, --no-blockwise
process inputs blockwise (default: False)
--stats Path path to dataset statistics
(will be written if it does not exist,
but the model requires statistical dataset measures)
(default: dataset_statistics.json)
--preview, --no-preview
preview which files would be processed
and what outputs would be generated. (default: False)
--weight-format {keras_hdf5,onnx,pytorch_state_dict,tensorflow_saved_model_bundle,torchscript,any}, --weights-format {keras_hdf5,onnx,pytorch_state_dict,tensorflow_saved_model_bundle,torchscript,any}, --weight_format {keras_hdf5,onnx,pytorch_state_dict,tensorflow_saved_model_bundle,torchscript,any}, --weights_format {keras_hdf5,onnx,pytorch_state_dict,tensorflow_saved_model_bundle,torchscript,any}
The weight format to use. (default: any)
--example, --no-example
generate and run an example
1. downloads example model inputs
2. creates a `{model_id}_example` folder
3. writes input arguments to `{model_id}_example/bioimageio-cli.yaml`
4. executes a preview dry-run
5. executes prediction with example input
(default: False)
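Rather than escaping the double quotes in the --inputs JSON by hand, as shown in the help text above, the argument can be generated with json.dumps. A minimal sketch using the same two samples ('a' and 'b') with 'raw' and 'mask' input tensors:

```python
import json

# Two samples ('a' and 'b'), each providing a 'raw' and a 'mask' input tensor
inputs = [["a_raw.tif", "a_mask.tif"], ["b_raw.tif", "b_mask.tif"]]

# json.dumps produces valid JSON with the double quotes the CLI expects;
# compact separators avoid whitespace inside the argument
arg = "--inputs=" + json.dumps(inputs, separators=(",", ":"))
print(arg)  # --inputs=[["a_raw.tif","a_mask.tif"],["b_raw.tif","b_mask.tif"]]
```

When passing this on a shell command line, the whole argument still needs to be quoted (or the inner double quotes escaped) so the shell does not strip them.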
Create a local example and run prediction locally:
bioimageio predict affable-shark --example
0:00:01.721395 | INFO | cli - starting CLI with arguments:
['/opt/hostedtoolcache/Python/3.12.12/x64/bin/bioimageio',
'predict',
'affable-shark',
'--example']
0:00:01.722125 | INFO | cli - loaded CLI input:
{'predict': {'example': True, 'source': 'affable-shark'}}
0:00:01.722355 | INFO | cli - executing CLI command:
{'predict': {'blockwise': False,
'example': True,
'inputs': ['{input_id}/001.tif'],
'outputs': 'outputs_{model_id}/{output_id}/{sample_id}.tif',
'overwrite': False,
'preview': False,
'source': 'affable-shark',
'stats': 'dataset_statistics.json',
'weight_format': 'any'}}
0:00:02.745085 | INFO | io_utils - loaded affable-shark from https://hypha.aicell.io/bioimage-io/artifacts/affable-shark/files/rdf.yaml
0:00:01.749145 | INFO | cli - starting CLI with arguments:
['/opt/hostedtoolcache/Python/3.12.12/x64/bin/bioimageio',
'predict',
'--preview',
'--overwrite',
'--stats=affable-shark_example/dataset_statistics.json',
'--inputs=[["affable-shark_example/input0/001.tif"]]',
'--outputs=affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'affable-shark']
0:00:01.750363 | INFO | cli - loaded CLI input:
{'predict': {'inputs': [['affable-shark_example/input0/001.tif']],
'outputs': 'affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'overwrite': True,
'preview': True,
'source': 'affable-shark',
'stats': 'affable-shark_example/dataset_statistics.json'}}
0:00:01.750655 | INFO | cli - executing CLI command:
{'predict': {'blockwise': False,
'example': False,
'inputs': [['affable-shark_example/input0/001.tif']],
'outputs': 'affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'overwrite': True,
'preview': True,
'source': 'affable-shark',
'stats': 'affable-shark_example/dataset_statistics.json',
'weight_format': 'any'}}
0:00:03.883297 | INFO | io_utils - loaded affable-shark from https://hypha.aicell.io/bioimage-io/artifacts/affable-shark/files/rdf.yaml
🛈 bioimageio prediction preview structure:
{'{sample_id}': {'inputs': {'{input_id}': '<input path>'},
'outputs': {'{output_id}': '<output path>'}}}
🔎 bioimageio prediction preview output:
{'1': {'inputs': {'input0': 'affable-shark_example/input0/001.tif'},
'outputs': {'output0': 'affable-shark_example/outputs/output0/1.tif'}}}
0:00:01.781906 | INFO | cli - starting CLI with arguments:
['/opt/hostedtoolcache/Python/3.12.12/x64/bin/bioimageio',
'predict',
'--overwrite',
'--stats=affable-shark_example/dataset_statistics.json',
'--inputs=[["affable-shark_example/input0/001.tif"]]',
'--outputs=affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'affable-shark']
0:00:01.783099 | INFO | cli - loaded CLI input:
{'predict': {'inputs': [['affable-shark_example/input0/001.tif']],
'outputs': 'affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'overwrite': True,
'source': 'affable-shark',
'stats': 'affable-shark_example/dataset_statistics.json'}}
0:00:01.783387 | INFO | cli - executing CLI command:
{'predict': {'blockwise': False,
'example': False,
'inputs': [['affable-shark_example/input0/001.tif']],
'outputs': 'affable-shark_example/outputs/{output_id}/{sample_id}.tif',
'overwrite': True,
'preview': False,
'source': 'affable-shark',
'stats': 'affable-shark_example/dataset_statistics.json',
'weight_format': 'any'}}
0:00:03.735071 | INFO | io_utils - loaded affable-shark from https://hypha.aicell.io/bioimage-io/artifacts/affable-shark/files/rdf.yaml
predict with affable-shark: 0%| | 0/1 [00:00<?, ?sample/s]
predict with affable-shark: 100%|██████████| 1/1 [00:00<00:00, 1.34sample/s]
🎉 Successfully ran example prediction!
To predict the example input using the CLI example config file affable-shark_example/bioimageio-cli.yaml, execute `bioimageio predict` from affable-shark_example:
$ cd affable-shark_example
$ bioimageio predict "affable-shark"
Alternatively, run the following command in the current working directory, not the example folder:
$ bioimageio predict --overwrite --stats="affable-shark_example/dataset_statistics.json" --inputs="[[\"affable-shark_example/input0/001.tif\"]]" --outputs="affable-shark_example/outputs/{output_id}/{sample_id}.tif" "affable-shark"
(note that a local 'bioimageio-cli.json' or 'bioimageio-cli.yaml' may interfere with this)
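The output paths shown in the preview above follow the --outputs pattern by substituting the '{model_id}', '{output_id}' and '{sample_id}' placeholders. A hypothetical sketch of that substitution (expand_output_pattern is a made-up helper for illustration, not part of the CLI):

```python
def expand_output_pattern(pattern: str, model_id: str, output_id: str, sample_id: str) -> str:
    """Fill the CLI's '{model_id}', '{output_id}' and '{sample_id}' placeholders."""
    return (pattern
            .replace("{model_id}", model_id)
            .replace("{output_id}", output_id)
            .replace("{sample_id}", sample_id))

# Reproduces the mapping shown in the prediction preview above:
# input 'affable-shark_example/input0/001.tif' -> sample_id '1'
path = expand_output_pattern(
    "affable-shark_example/outputs/{output_id}/{sample_id}.tif",
    model_id="affable-shark", output_id="output0", sample_id="1",
)
print(path)  # affable-shark_example/outputs/output0/1.tif
```

Note that '{sample_id}' is extracted from the input file paths, which is how '001.tif' maps to sample '1' in the preview.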
Python: bioimageio.core.predict
Here is a code snippet to get started deploying a model in Python using the test sample provided by the model description:
from bioimageio.core import load_model_description, predict
from bioimageio.core.digest_spec import get_test_input_sample
model_descr = load_model_description("<model.yaml or model.zip path or URL>")
input_sample = get_test_input_sample(model_descr)
output_sample = predict(model=model_descr, inputs=input_sample)
Python: predict your own data
from bioimageio.core import load_model_description, predict
from bioimageio.core.digest_spec import create_sample_for_model

model_descr = load_model_description("<model.yaml or model.zip path or URL>")
input_sample = create_sample_for_model(
    model_descr,
    inputs={"raw": "<path to your input image>"},
)
output_sample = predict(model=model_descr, inputs=input_sample)
Python: prediction options
For model inference from within Python these options are available:
- bioimageio.core.predict to run inference on a single sample/image.
- bioimageio.core.predict_many to run inference on a set of samples.
- bioimageio.core.create_prediction_pipeline for reusing the instantiated model and more fine-grained control over the inference process; this function creates a suitable bioimageio.core.PredictionPipeline for more advanced use.
Other bioimageio.core functionality
CLI: bioimageio commands
To get an overview of available commands:
bioimageio --help
usage: bioimageio [-h] {validate-format,test,package,predict,update-format,update-hashes,add-weights,empty-cache} ...
bioimageio - CLI for bioimage.io resources 🦒
library versions:
bioimageio.core 0.9.6
bioimageio.spec 0.5.7.4
spec format versions:
model RDF 0.5.7
dataset RDF 0.3.0
notebook RDF 0.3.0
options:
-h, --help show this help message and exit
subcommands:
{validate-format,test,package,predict,update-format,update-hashes,add-weights,empty-cache}
validate-format Validate the meta data format of a bioimageio resource.
test Test a bioimageio resource (beyond meta data formatting).
package Save a resource's metadata with its associated files.
predict Run inference on your data with a bioimage.io model.
update-format Update the metadata format to the latest format version.
update-hashes Create a bioimageio.yaml description with updated file hashes.
add-weights Add additional weights to a model description by converting from available formats.
empty-cache Empty the bioimageio cache directory.
Python: API docs
See bioimageio.core.