Instructions for using stabilityai/stable-diffusion-3-medium-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-diffusion-3-medium-diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
incorrect use of T5 model?
#39
by X-niper - opened
We compared the fp16 T5 model in this repo against an fp32 T5 model initialized from the same parameters, and found that the two produce different outputs for the same text prompt.

A possible solution: cast the model to fp32 and run inference under autocast(bf16), or run it directly in fp32.