Text-to-Image
Diffusers
Safetensors
English
ControlNet
Flux.1-dev
image-generation
Stable Diffusion
Instructions to use Shakker-Labs/FLUX.1-dev-ControlNet-Depth with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Shakker-Labs/FLUX.1-dev-ControlNet-Depth with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Depth",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Did you write it wrong?
#1
by demo001s - opened
Thank you for pointing out!
Haha.
I found that this model works reasonably well on people, but I tried a few indoor images and it did not perform well.
wanghaofan changed discussion status to closed
wanghaofan changed discussion status to open
Yes, it should work better on human generation. This is due to the limited diversity of the training data; we will continue training on more data.
Here it is:
https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k
Wow, I have just had a look at these images, out of curiosity. Are both the images and the captions automatically generated? I only read four of them, and they are all wrong in one way or another. I am not sure I would use these for training.
That dataset is from someone else. I looked through some of it, and the captions are mostly not wrong. I am not sure which ones you mean.
