
How to use trained pixart lora for inference? #68

Closed Answered by surebert
segalinc asked this question in Q&A

This example will output an image called demo.png. Replace the checkpoint definition with the path to your checkpoint, and replace the prompt with the prompt you want to generate.

from pathlib import Path

import torch
from diffusers import PixArtAlphaPipeline, AutoencoderKL
from peft import PeftModel

# Path to the LoRA checkpoint directory produced by training
checkpoint = Path('/path/to/your/build/checkpoint-500')
prompt = 'A small cactus with a happy face in the Sahara desert.'

# Fine-tuned VAE used with PixArt-alpha
vae = AutoencoderKL.from_pretrained(
    'stabilityai/sd-vae-ft-ema',
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    vae=vae,
    torch_dtype=torch.float32,
).to("cuda")
# Attach the trained LoRA adapter to the pipeline's transformer
pipe.transformer = PeftModel.from_pretrained(pipe.transformer, checkpoint)

# Enable…
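The snippet is cut off after the LoRA adapter is attached. A minimal sketch of the remaining inference step, assuming the standard diffusers pipeline call (the `run_inference` helper and its default output path are hypothetical, not from the thread):

```python
def run_inference(pipe, prompt, out_path='demo.png'):
    # Calling a diffusers pipeline returns an output object whose
    # .images attribute is a list of PIL images; save the first one.
    image = pipe(prompt=prompt).images[0]
    image.save(out_path)
    return out_path
```

With the pipeline built as above, `run_inference(pipe, prompt)` would write demo.png next to the script.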

Answer selected by lawrence-cj