How to use the ONNX Runtime for inference
🤗 Diffusers provides a Stable Diffusion pipeline compatible with ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX, including CPUs and hardware where an accelerated version of PyTorch is not available.
Installation
- TODO
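While this section is still a TODO, a reasonable starting point (an assumption, not official installation instructions) is to install Diffusers alongside an ONNX Runtime package from PyPI:

# GPU (CUDA) execution
pip install diffusers transformers onnxruntime-gpu

# or, for CPU-only execution
pip install diffusers transformers onnxruntime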
Stable Diffusion Inference
The snippet below demonstrates how to use ONNX Runtime. You need to use StableDiffusionOnnxPipeline instead of StableDiffusionPipeline. You also need to download the weights from the onnx branch of the repository and indicate the runtime provider you want to use.
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",  # the ONNX-exported weights live on the `onnx` branch
    provider="CUDAExecutionProvider",  # ONNX Runtime execution provider to use
)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
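To run on CPU-only hardware, the same pipeline can be pointed at ONNX Runtime's CPU execution provider. A minimal sketch, reusing the model and revision from the snippet above:

# CPU-only inference: swap the execution provider
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",
    provider="CPUExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]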
Known Issues
- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching; see the sketch below.
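As a workaround, a simple loop over prompts avoids the batched memory spike. A sketch assuming the pipeline above is already loaded; the prompt list is illustrative:

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse at dawn",
]

# Run each prompt through the pipeline one at a time instead of as a single batch.
images = []
for p in prompts:
    images.append(pipe(p).images[0])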