diffusers [dif]
twitter id [id]
emilianJR/AnyLoRA [lora]
pipe.load_lora_weights(".", weight_name="lora.safetensors") [lora]
pipe.safety_checker = None [pipe]
Pokémon LoRA [lora]
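A minimal sketch of the notes above, assuming "emilianJR/AnyLoRA" loads as a diffusers checkpoint and "lora.safetensors" sits in the current directory; the prompt is a placeholder.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "emilianJR/AnyLoRA", torch_dtype=torch.float16
).to("cuda")

# Attach the local LoRA file and drop the NSFW filter, as in the notes.
pipe.load_lora_weights(".", weight_name="lora.safetensors")
pipe.safety_checker = None

image = pipe("pokemon, 1 character, full body").images[0]
image.save("pokemon_lora.png")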
refiner [sd]
cuda 11.4, torch==2.0.0, xformers==0.0.19 [cuda]
torch==1.11.0 - git clone --recursive https://github.com/facebookresearch/xformers.git [err]
ImportError: cannot import name 'CLIPImageProcessor' from 'transformers' -> use transformers>=4.29 [err]
python>=3.8, torch>=1.7.0 [ver]
load_lora_weights() loads into the UNet and the text encoder; load_attn_procs() loads into the UNet only [lora]
python launch.py --ckpt my.ckpt [ckpt]
guoyww/AnimateDiff [adif]
guidance_scale: trade-off between image freedom and prompt adherence [pipe]
strength=0.1 + num_inference_steps=50 -> 0.1*50 = 5 actual denoising steps [pipe]
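A quick sketch of the guidance_scale trade-off; the model id and prompt are placeholders, not from these notes.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Low guidance_scale -> looser images; high -> closer to the prompt.
for gs in (3.0, 7.5, 12.0):
    image = pipe("a watercolor fox in a snowy forest", guidance_scale=gs).images[0]
    image.save(f"fox_gs_{gs}.png")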
sd-concepts-library/gta5-artwork [mod]
pipe.load_textual_inversion("sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative") [mod]
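Sketch loading both textual-inversion embeddings above (base model id is an assumption): the GTA5 concept is triggered with its <gta5-artwork> token and EasyNegative goes in the negative prompt.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/gta5-artwork")  # adds <gta5-artwork>
pipe.load_textual_inversion(
    "sayakpaul/EasyNegative-test",
    weight_name="EasyNegative.safetensors",
    token="EasyNegative",
)

image = pipe(
    "a city street at sunset in the style of <gta5-artwork>",
    negative_prompt="EasyNegative",
).images[0]
image.save("gta5_street.png")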
images = pipeline.generate(prompts, uncond_prompts, seed=42) [gen]
https://hoshikat.hatenablog.com/ [tut]
ControlNet + CharTurner [ctr]
CharTurner LoRA [mod]
from diffusers.utils import load_image [utils]
https://runrunsketch.net/ [tut]
sd.from_pretrained(mid, safety_checker=None, torch_dtype=torch.float16).to("cuda") [pipe]
IP-Adapter face model to apply specific faces to your images [ada]
DDIMScheduler, EulerDiscreteScheduler for the face model [sch]
adapters [ada]
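A sketch of the IP-Adapter face model with a DDIM scheduler as the note above suggests; the repo/weight names, scale, and image paths are assumptions (requires a diffusers version that ships load_ip_adapter).

import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)

face = load_image("reference_face.png")  # the face to transfer
image = pipe(
    "a portrait photo, soft studio lighting",
    ip_adapter_image=face,
    negative_prompt="lowres, blurry, deformed",
).images[0]
image.save("ip_adapter_face.png")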
https://note.com/npaka [tut]
pin down a character design with a LoRA [det]
repeatable seeds [seed]
reusing_seeds [seed]
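Reproducibility sketch (model id and prompt are placeholders): pass seeded torch.Generator objects so the same character prompt can be re-rendered with identical noise.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of an original character, short silver hair, green eyes"
seeds = [0, 1, 2, 3]
generators = [torch.Generator("cuda").manual_seed(s) for s in seeds]

# One generator per prompt copy -> each image is tied to a repeatable seed.
images = pipe([prompt] * len(seeds), generator=generators).images
for seed, img in zip(seeds, images):
    img.save(f"char_seed_{seed}.png")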
Yntec/3Danimation [mod]
…, BREAK white t-shirt, … [pro]
num_inference_steps=50, strength=0.8 -> 50*0.8 = 40 steps [i2i]
pipeline(prompt=prompt, image=init_image, strength=0.6, num_inference_steps=30).images[0] [i2i]
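An img2img sketch tying the strength arithmetic above together; the init image path and prompt are placeholders, and the MeinaMix checkpoint noted below is assumed to load in diffusers format.

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Meina/MeinaMix_V10", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("rough_sketch.png").resize((512, 512))
image = pipe(
    prompt="1girl, white t-shirt, detailed anime illustration",
    image=init_image,
    strength=0.6,
    num_inference_steps=30,  # ~0.6 * 30 = 18 denoising steps actually run
).images[0]
image.save("img2img_result.png")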
Meina/MeinaMix_V10 - Japanese anime style [mod]
SDXL, Kandinsky [i2i]
80-90% likeness [ctr]
consistent-ai-character - CharTurner [det]
CharTurner [mod]
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) [inv]
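The inverse_scheduler line above comes up in DiffEdit-style editing; this is a rough sketch assuming StableDiffusionDiffEditPipeline, with model id, prompts, and image path as placeholders.

import torch
from diffusers import (
    DDIMInverseScheduler,
    DDIMScheduler,
    StableDiffusionDiffEditPipeline,
)
from diffusers.utils import load_image

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)

init_image = load_image("fruit_bowl.png").resize((768, 768))
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"

# Mask the edit region, invert the image with DDIM, then denoise toward the target.
mask = pipeline.generate_mask(
    image=init_image, source_prompt=source_prompt, target_prompt=target_prompt
)
inv_latents = pipeline.invert(prompt=source_prompt, image=init_image).latents
image = pipeline(
    prompt=target_prompt, mask_image=mask, image_latents=inv_latents
).images[0]
image.save("diffedit_result.png")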
UniDiffuser [uni]
diffusers.StableDiffusionControlNetImg2ImgPipeline [ctr]
from controlnet_aux.processor import Processor [ctr]
pip install controlnet-aux==0.0.7 [ctr]
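Sketch combining the ControlNet img2img pipeline with a controlnet_aux preprocessor; the openpose ControlNet repo, base model, and image paths are assumptions.

import torch
from controlnet_aux.processor import Processor
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("character.png").resize((512, 512))
pose = Processor("openpose")(init_image, to_pil=True)  # extract the pose map

image = pipe(
    prompt="same character, white t-shirt, clean lineart",
    image=init_image,
    control_image=pose,
    strength=0.7,
).images[0]
image.save("controlnet_img2img.png")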
torch==2.1.2, xformers==0.0.23.post1 [tor]
if torch>=2.0, xformers is not needed (X) [tor]
IP-Adapter [ipa]
quantization [opt]
from diffusers import StableVideoDiffusionPipeline [svd]
mid = "stabilityai/stable-video-diffusion-img2vid-xt" [svd]
openmmlab/PIA-condition-adapter [pia]
i2vgen-xl [i2v]
diffusers AnimateDiff [adif]
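AnimateDiff-in-diffusers sketch: a MotionAdapter paired with an SD 1.5-style checkpoint. The motion-adapter repo id, scheduler settings, and prompt are assumptions; AnyLoRA is reused from the notes above.

import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/AnyLoRA", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)

output = pipe(
    prompt="a girl walking through a field of flowers, anime style",
    negative_prompt="lowres, bad anatomy",
    num_frames=16,
    guidance_scale=7.5,
)
export_to_gif(output.frames[0], "animatediff.gif")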
KeyError: 'sample' -> img = pipe(prompt).images[0] [wd]
diffusers - ckpt [ckpt]
inpaint [ctrl]
wd prompt [wd]
ControlNet models [ctrl]
Torch not compiled with CUDA enabled [err]
ImportError from pytorch_lightning -> pip install pytorch_lightning==1.7.7 [err]
3Dillustration-stable-diffusion - not good [3d]
model_id = "aidystark/3Dillustration-stable-diffusion"; mind the space [3di]
hakurei/waifu-diffusion [wd]
wd1.4a == sd1.2 [wd]
optimizing SD [opt]
source .venv/bin/activate [webui]
no space left -> pip install --cache-dir=/home/user/tmp ... [webui]
img2img [webui]
$p scripts/txt2img.py --prompt $tex --plms --ckpt $pt --skip_grid --n_samples 1 [sd]
AttributeError: module 'cv2.dnn' has no attribute 'DictValue' -> opencv-python==4.8.0.74 [cv2]
import pipe [i2i]
python==3.8.5, pytorch==1.11.0, torchvision==0.12.0 [sd]
pip install taming-transformers-rom1504 [sd]
imwatermark -> pip install invisible-watermark [sd]
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt [mod]