Instructions for using lilylilith/AnyPose. The LoRA can be run through libraries such as Diffusers, through hosted inference providers, in notebooks (Google Colab, Kaggle), and in local apps (Draw Things).

How to use lilylilith/AnyPose with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("lilylilith/AnyPose")

prompt = "Turn this cat into a dog"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```
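The pipeline returns a standard PIL image, so the result can be written straight to disk. A minimal follow-up (the output filename is just an example):

```python
# Save the edited image to disk using the PIL.Image API.
image.save("anypose_output.png")
```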
How to use base and helper files
#1, opened by Tonygeek
First, thanks for this LoRA. I am about to experiment with it.
For us noobs, it would be nice to explain right from the start how the two files are supposed to be used. Are they both LoRAs that get connected one after another, as if they were two separate LoRAs?
You should hook them up one after another and set the strength to 0.7 each. Or you could use the Power Lora Loader from the rgthree node pack (highly recommended).
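The reply above describes a ComfyUI workflow (rgthree is a ComfyUI node pack). For Diffusers users, the equivalent of chaining two loader nodes is to load each file as a named adapter and activate both at weight 0.7. A minimal sketch; the `weight_name` values below are placeholders, not the repository's actual file names:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511", dtype=torch.bfloat16, device_map="cuda"
)

# Load each file as its own named adapter.
# NOTE: the weight_name values are placeholders; substitute the actual
# .safetensors file names from the lilylilith/AnyPose repository.
pipe.load_lora_weights(
    "lilylilith/AnyPose", weight_name="base.safetensors", adapter_name="base"
)
pipe.load_lora_weights(
    "lilylilith/AnyPose", weight_name="helper.safetensors", adapter_name="helper"
)

# Activate both adapters at strength 0.7 each, the Diffusers equivalent
# of chaining two LoRA loaders at 0.7 in ComfyUI.
pipe.set_adapters(["base", "helper"], adapter_weights=[0.7, 0.7])
```

After `set_adapters`, the pipeline can be called exactly as in the snippet further up; both LoRAs are applied during inference.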