ComfyUI LoRA strength: a digest of community tips and questions from Reddit threads.

I see a lot of tutorials demonstrating LoRA usage with Automatic1111, but not many for ComfyUI. Better yet, loaders could output trigger words right into the prompt. (Yes, there is a ComfyUI Discord server, for whoever asked. And as one commenter joked: "I don't even see the prompts anymore.")

In Automatic1111 you adjust the weights in the prompt, like <lora:catpics_lora:0.5>, and play around with the weight numbers until it looks how you want. In ComfyUI the strength is set on the loader node instead, and <lora:...> tags in the prompt are ignored by the core app.

Prompt Editing (an Automatic1111 feature) lets you start or stop a part of the prompt after some step. For example, "Photo of [<lora:abdwd:0.3> : <lora:abdwd:0.6> : 10] wearing a white dress" switches the LoRA weight from 0.3 to 0.6 after step 10.

On clip skip: A1111 indicates a positive value for stopping CLIP before its last layers, while ComfyUI denotes the same thing negatively (think of the Python idea of negative array indices pointing at the last elements; ComfyUI is the more programmer-friendly of the two). That is, 1 (A1111) = -1 (ComfyUI), and so on.

Why does per-step LoRA strength (loractl in A1111) hurt speed a lot? Because normally A1111 only does the "compute changes to the model weights caused by the LoRA" work once at the start of the generation, then reuses those weights for the rest of it; loractl causes A1111 to recompute those weights for each step where the weights differ from the previous step.

LoRAs also seem to work quite a bit differently in ComfyUI than in Automatic1111: people usually need a much higher strength in Comfy than in A1111 (one user had to set strength and clip strength to 2-3). You can use a LoRA in ComfyUI either with a higher strength and no trigger word, or with a lower strength plus trigger words in the prompt, more like you would with A1111.

Another LoRA tip to throw on the bonfire: since everything is a mix of a mix of a mix, watch out for LoRA "overfitting" that makes your images look like deep-fried memes. A related habit: save each LoRA as a style in styles.csv so its weight and trigger words are recalled together.

On "smooth step" scaling of LoRA weights: it is not simply scaling strength; the concept itself can change as you increase the smooth step.

A sample workflow from one thread: txt2img at a standard 512 x 640 size, CFG 5 and 25 steps with the uni_pc_bh2 sampler, this time adding a self-trained character LoRA and switching to the Wyvern v8 checkpoint.

StabilityAI just released ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one, and each comes with a ComfyUI workflow to get you started.

Q: Is there an efficient way of affecting a LoRA's strength depending on the prompt? For example, if "night" is in the prompt I want the strength of the LoRA to be low. One building block: convert the model strength to an input rather than a value set on the node, then wire up a single shared float input to each LoRA's model strength (incredibly useful if you have thousands of LoRAs; rgthree's auto-fill helps with the names). A sketch of a prompt-aware strength node follows below.
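A minimal sketch of that prompt-dependent idea as a custom node. The class name, category, and defaults are hypothetical (this is not an existing node pack); its FLOAT output is meant to feed a LoraLoader whose strength_model widget has been converted to an input:

```python
# Hypothetical ComfyUI custom node: emits a low LoRA strength when a keyword
# (e.g. "night") appears in the prompt text, and a normal strength otherwise.
class PromptConditionalLoraStrength:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "prompt": ("STRING", {"multiline": True}),
            "keyword": ("STRING", {"default": "night"}),
            "strength_if_present": ("FLOAT", {"default": 0.2, "min": -10.0, "max": 10.0, "step": 0.05}),
            "strength_otherwise": ("FLOAT", {"default": 0.8, "min": -10.0, "max": 10.0, "step": 0.05}),
        }}

    RETURN_TYPES = ("FLOAT",)
    FUNCTION = "choose"
    CATEGORY = "utils/lora"

    def choose(self, prompt, keyword, strength_if_present, strength_otherwise):
        # Case-insensitive substring match; swap in smarter matching if needed.
        hit = keyword.strip().lower() in prompt.lower()
        return (strength_if_present if hit else strength_otherwise,)

NODE_CLASS_MAPPINGS = {"PromptConditionalLoraStrength": PromptConditionalLoraStrength}
```

The same FLOAT output can be fanned out to several loaders, which is exactly the shared-strength wiring described above.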
So to use them in ComfyUI, load them like you would any other LoRA and experiment with fractional strengths. To replicate an A1111-style workflow more generally: insert a LoRA, set the strength via the loader's slider, and do not insert anything special in the prompt. If we've got LoRA loader nodes with actual sliders to set the strength value, though, I've not come across them yet. I also find it starts to look weird if you have more than three LoRAs at the same time.

A LoRA mask is essential, given how important LoRAs are in the current ecosystem. And if the denoising strength must be brought up to generate something interesting, ControlNet can help to retain the original structure.

Training notes: as usual AnimateDiff has trouble keeping consistency, so I tried making my first LoRA. It takes about 2 hours to train a 768x768 model, although I do need to turn on gradient checkpointing, otherwise I receive CUDA OOM errors. (Is a 12GB GPU sufficient to train with bf16? Untested so far.) About two years ago, u/ruSauron transformed my approach to LoRA training when he posted on Reddit about block weights: performing block weight analysis can significantly change how your LoRA behaves.

Speed experiments: I wanted to see how fast I could push the new LCM LoRA. I was using the SD 1.5 version with the Photon model at 512x512, 4 steps, sampler Euler a, CFG scale 1.5, and I created a TensorRT SD UNet model for a batch of 16 at 512x512, training the LoRA with the LCM model in the TensorRT LoRA tab as well. Up to 7 fps now using the SD-Hyper 1-step LoRA (perfect for little wizards), and LCM LoRA + ControlNet OpenPose + AnimateDiff runs at 12 steps. One caveat: the image can come out looking dappled and fuzzy, not nearly as good as ddim, for example.

There is also a guide, inspired by 御月望未's tutorial, that explores a technique for significantly enhancing the detail and color in illustrations using noise and texture.

A goal several tool-builders in these threads share: a) make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (number of diffusion steps, LoRA strength, etc.), and b), the core part, run these workflows with progress updates without worrying about the details of WebSockets and so forth.

Security note: Reddit user _roblaughter_ discovered a severe security issue in the ComfyUI_LLMVISION node created by u/AppleBotzz. If you have installed and used this node, your sensitive data, including browser passwords, credit card information, and browsing history, may have been compromised and sent to a Discord server via webhook.
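For anyone scripting this, the slice below shows roughly what a LoRA loader looks like in ComfyUI's API-format workflow JSON, written as a Python dict. The node ids and file names are invented; the LoraLoader inputs themselves (model, clip, lora_name, strength_model, strength_clip) are the stock ones:

```python
# A hedged sketch of the relevant slice of an API-format ComfyUI workflow:
# the LoraLoader sits after CheckpointLoaderSimple and patches MODEL and CLIP.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_base.safetensors"}},   # invented name
    "2": {"class_type": "LoraLoader",
          "inputs": {
              "model": ["1", 0],        # MODEL output of the checkpoint loader
              "clip": ["1", 1],         # CLIP output of the checkpoint loader
              "lora_name": "catpics_lora.safetensors",
              "strength_model": 0.8,    # how strongly the UNet is patched
              "strength_clip": 0.8,     # how strongly the text encoder is patched
          }},
    # everything downstream (CLIPTextEncode, KSampler, ...) now takes
    # ["2", 0] as its MODEL and ["2", 1] as its CLIP.
}
```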
For per-prompt LoRA control in Automatic1111, download this extension: stable-diffusion-webui-composable-lora. A quick step by step for installing extensions: click the Extensions tab within the Automatic1111 web app > click the Available sub-tab > Load from > search "composable lora" > Install > then restart the web app and reload the UI.
Just beefed up the Power Prompt (rgthree) and added LoRA selection support. It was a bit trickier to capture the keys to make fast manipulation work, but, like with other phrases, you can ctrl+up/down arrow to change the strength of the LoRAs (just like embeddings).

Basic wiring is simple: feed the model and clip from your checkpoint loader into a LoRA loader, then take the model and clip from there to the rest of the workflow (checkpoint --> LoRA --> sampler). To use several, chain a bunch of loader nodes; one loader variant is used the same way but, unlike the others, has an on/off switch. Slider LoRAs already work in ComfyUI too: just load the slider as a LoRA and change the strength_model value.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, many people want a LoRA stacker node instead. The Lora Stacker from Efficiency Nodes works, but only with its proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with any ComfyUI update. A popular alternative recipe: a LoRA Stacker (from the Efficiency Nodes set) feeding into a CR Apply LoRA Stack node (from the Comfyroll set). The output from the latter is a model with all the LoRAs included, which can then route into your KSampler; a code sketch of the equivalent chaining follows below. (Figure in the original post: the workflow with the LoRA Stack node connected to the other nodes.)

Open question from the same threads: what is the easiest way to have a LoRA loader and an IPAdapter plugged into one KSampler? A simple Model Merge doesn't seem to activate the IPAdapters, and passing the IPAdapter into the model input for the LoRA and then plugging that into the KSampler wasn't obviously right either. Anyone have an easy solution?

As of Monday, December 2nd, ComfyUI supports masking and scheduling LoRA and model weights natively as part of its conditioning system. The update also has an extensive ModelPatcher rework and introduces wrappers and callbacks to make custom node implementations require fewer hacks.
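To make the chaining concrete, here is a hedged sketch (a hypothetical helper, reusing the invented API-format dict from the earlier example) that emulates a LoRA stack by appending chained LoraLoader nodes:

```python
def add_lora_chain(workflow, model_ref, clip_ref, loras, next_id=100):
    """Append one LoraLoader per (name, strength) pair, each feeding the next.

    model_ref/clip_ref are ComfyUI link references like ["1", 0]; a strength
    of 0.0 leaves an entry effectively bypassed.
    """
    for name, strength in loras:
        node_id = str(next_id)
        workflow[node_id] = {
            "class_type": "LoraLoader",
            "inputs": {
                "model": model_ref, "clip": clip_ref, "lora_name": name,
                "strength_model": strength, "strength_clip": strength,
            },
        }
        model_ref, clip_ref = [node_id, 0], [node_id, 1]
        next_id += 1
    return model_ref, clip_ref  # wire these into KSampler / CLIPTextEncode

# usage against the earlier sketch:
# model, clip = add_lora_chain(workflow, ["1", 0], ["1", 1],
#                              [("styleA.safetensors", 0.8),
#                               ("detail.safetensors", 0.5)])
```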
Assorted questions and answers from the threads:

Q: Does the order in which the LoRAs are connected matter, or is it just the strength of each LoRA that matters? A: Easy to test: keep all other parameters fixed and change only the LoRA order.

Q: Civitai LoRA generator and ComfyUI: what/where is the trigger? A: Trigger words are available at Civitai, but not always in the accompanying JSON files. TIL that you can check a LoRA's metadata directly (it can carry the activation prompts, strength, and even the training parameters).

On browsing LoRAs: in A1111 you can get a custom plugin that shows a small poster of the LoRA, checkpoint, or embedding along with its information, and not being able to visualize LoRAs is the one thing bothering many ComfyUI converts. With rgthree you can click the little "i" symbol next to a LoRA's name to bring up a window that scrapes the same information, and for a slightly better loader UX, try the CR Load LoRA node from Comfyroll Custom Nodes.

SDXL notes: the intended way to use SDXL is to make a "draft" image with the Base model and then use the Refiner to make it better; plenty of people instead do the upscaled passes with SD1.5, as everyone else does, and let SDXL only do the initial composition. As far as I know, sd_xl_offset_example-lora_1.0 is the best way to control the general brightness or darkness of an image.

Q: Need help with a LoRA and faceswap workflow. A: In your case, it is probably better to use ControlNet and a face LoRA.

Q (AnimateDiff): I'm trying to configure ComfyUI with AnimateDiff using a motion LoRA. I can get it to work just fine with a text prompt, but when I try to give it more control with an image input, the image is accepted and rendered and I'm not getting any motion.

Detail LoRAs: a detail LoRA such as more-details is highly recommended; at low steps most realistic and semi-realistic models benefit from added "details", and FreeU_V2 (like the old FreeU module) gives a massive quality increase at low step counts while being compatible with all models. Run more-details at a negative strength for a more cartoony look.

Training: previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI; any advice or resource regarding the topic would be greatly appreciated. One complaint about the current LoRA training node: it doesn't have an output for the newly created LoRA, and it would be nice to have the LoRA feed into an actual workflow. While a long training run continues, you can at least test the files in another pod with the HunyuanVideo Lora Select node after uploading them to the /loras/ folder.

(The perennial meta-thread is also here: some creators stopped linking their models because, in their words, people have been extremely spoiled and treat the internet as a place to be handed free things, instead of a collaboration between human minds from different economic and cultural spheres building a shared culture.)

One scheduling node pack documents its LoRA keyframe parameters roughly like this (a per-step interpolation sketch follows the list):
- strength_start: the LoRA strength at the first keyframe to be created.
- strength_end: the LoRA strength at the last keyframe to be created. Many LoRAs work from -1.0 to +1.0, and some may support values outside that range.
- start_percent: when to start applying the LoRA keyframes; a value of 0 would start from the first step, while a value of 0.5 would start at 50% of the sampling process.
- end_percent: when to end the LoRA scheduling; this value should be greater than start_percent.
- If one set of values is smaller than the other, the smallest will be padded with what is present in its LoRA-strength counterpart.
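A plain-Python sketch of how keyframed strengths like these typically expand into a per-step schedule (linear interpolation assumed; real node packs may ease differently and pad mismatched keyframe lists as described above):

```python
def lora_strength_schedule(total_steps, strength_start, strength_end,
                           start_percent=0.0, end_percent=1.0):
    """Expand keyframe parameters into one LoRA strength per sampling step.

    Outside [start_percent, end_percent] the strength is 0.0 (bypassed);
    inside, it ramps linearly from strength_start to strength_end.
    """
    assert end_percent > start_percent, "end_percent should be greater than start_percent"
    first = round(start_percent * (total_steps - 1))
    last = round(end_percent * (total_steps - 1))
    span = max(last - first, 1)
    schedule = []
    for step in range(total_steps):
        if step < first or step > last:
            schedule.append(0.0)       # outside the scheduled window
        else:
            t = (step - first) / span  # 0.0 at the first keyframe, 1.0 at the last
            schedule.append(strength_start + t * (strength_end - strength_start))
    return schedule

# 20 steps, fading a LoRA in from 0.0 to 0.8 over the first half of sampling:
# lora_strength_schedule(20, 0.0, 0.8, start_percent=0.0, end_percent=0.5)
```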
On trigger words versus strength: using only the trigger word in the prompt, you cannot control a LoRA. This is because the model's patch for the LoRA is applied regardless of the presence of the trigger word. To prevent the application of a LoRA that is not used in the prompt, you need to directly connect or disconnect (bypass) the loader node itself; a strength of 0.000 means it is disabled and will be bypassed. In ComfyUI you often don't need the trigger word at all (especially if there's only one for the entire LoRA): mess with the strength_model setting in the loader instead. When a LoRA was trained around a trigger word, though, make sure you add it to your prompt.

Strength ranges are looser than A1111 habits suggest: it's not unusual to use LoRAs in ComfyUI at a strength of 5-10, and if you're not seeing much difference, try bumping up the strength_model of the LoRA.

A troubleshooting checklist from the threads:
- No matter what strength the LoRA was set to, the image stayed the same: is the LoRA actually in ComfyUI's lora folder?
- Anything a given LoRA is included in gets corrupted regardless of strength, and prompts that work great without it produce terrible results: decreasing the LoRA strength, removing negative prompts, changing steps, and messing with clip skip may all fail to help; maybe increase the LoRA strength, or that LoRA is simply not compatible with your checkpoint.
- The classic version of that incompatibility: your LoRA is SD1.5-based and you are using it with an SDXL 1.0 checkpoint, i.e. out-of-range dimensions.
- Maybe try putting everything except the LoRA trigger word in (weighted) parentheses in the prompt.
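Since core ComfyUI ignores A1111-style <lora:name:weight> tags, several node packs parse them out of the prompt and drive the loaders themselves. The basic idea, as a self-contained sketch:

```python
import re

LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.+-]+))?>")

def extract_lora_tags(prompt, default_weight=1.0):
    """Split A1111-style LoRA tags out of a prompt string.

    Returns (clean_prompt, [(lora_name, weight), ...]); the clean prompt is
    what you would pass to CLIPTextEncode, the pairs drive LoraLoader nodes.
    """
    loras = []

    def _grab(match):
        name = match.group(1)
        weight = float(match.group(2)) if match.group(2) else default_weight
        loras.append((name, weight))
        return ""  # remove the tag from the prompt text

    clean = LORA_TAG.sub(_grab, prompt)
    return re.sub(r"\s{2,}", " ", clean).strip(), loras

# extract_lora_tags("a cat <lora:catpics_lora:0.5> on a mat")
# -> ("a cat on a mat", [("catpics_lora", 0.5)])
```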
A process for consistent poses: create a 4000 x 4000 grid with pose positions (from OpenPose or Mixamo, etc.), then use img2img in ComfyUI with your prompt, e.g. "woman, blonde hair, leather jacket, blue jeans, white t-shirt".

For comparing LoRA weights, WebUI-style X/Y/Z plots can be reproduced: I use a batch size of 6 and render an X/Y plot with LoRA strength against model variations. To automate the weight axis, generate multiple images for every 0.2 change in weight and compare them yourself; the XY Plot in Efficiency Nodes for ComfyUI Version 2.0+ handles this, but with stacked LoRAs, changing each weight becomes time-consuming (a scripted sweep is sketched below). For judging the results, one sensible protocol: generate, say, 6 images with the same prompt and LoRA intensity for each methodology under test, ask five random people to give scores to each group of six, and also rate the generations at each LoRA strength from 1 to 5 according to how well the concept is represented.

Animation people are scheduling motion_scale or lora_strength values during a video to make it move in time with music, e.g. at frame 1 lora strength = -5, then something else by frame 100; note that sending the float output values from scheduler nodes into a loader's strength input doesn't work out of the box.

Assorted leftovers from the same threads: I understand how outpainting is supposed to work in ComfyUI, but getting good results is another matter; does the upscale pass benefit from using the LoRA at all, and what denoise strength do you prefer (simply adding detail to existing crude structures is the easiest)? There's an open ELI5 request for the 4x KSampler upscale workflow. And a performance sanity check: if a 1024x1024 image takes 20-30 seconds without a LoRA but 300-500 seconds with one, something is wrong; generation times shouldn't fluctuate that much.
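A hedged sketch of that scripted sweep against a locally running ComfyUI instance. It assumes an API-format workflow file exported via "Save (API Format)" in which node "2" is the LoraLoader; both the file name and the node id are made up:

```python
import json
import urllib.request

# Sweep LoRA strength in 0.2 increments and queue one job per value.
with open("workflow_api.json") as f:           # exported via "Save (API Format)"
    template = json.load(f)

for i in range(6):                             # 0.0, 0.2, ... 1.0
    strength = round(0.2 * i, 2)
    wf = json.loads(json.dumps(template))      # cheap deep copy of the template
    wf["2"]["inputs"]["strength_model"] = strength   # node "2" = LoraLoader (assumed)
    wf["2"]["inputs"]["strength_clip"] = strength
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)                # images land in ComfyUI's output dir
```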
Following on from the shared strength input: then you just need to set it to 0 in one place if you want to disable all the LoRAs at once. The same idea works per node: just add 5 or 6 loaders (however many you'll ever use at most), then turn them on/off as needed rather than rewiring.

Embedding confusion: does "<lora:easynegative:1.0>" written in the negative prompt do its job without any other LoRA loading? (EasyNegative is actually a textual-inversion embedding, so it is referenced by name in the prompt rather than loaded as a LoRA.) In the Efficiency nodes, does loading easynegative at a -1 weight work like a negative-prompt embed, and do you have to use a trigger word for things embedded like this?

Face workflows: if you use the SDXL face adapter with strength around 0.5 to 0.6, you can still easily prompt a background and other things; in one setup the IP adapter runs from step 3 to 10, easing out on a strength set at 1.0.

Blending a LoRA back toward the base model also works: render Model + LoRA at 100%, 75%, and 50% and then tweak as necessary. PS: this also works for ControlNet with the ConditioningAverage node; high-strength ControlNet at low resolution will sometimes look jagged in higher-res output, so lowering the effect during the hires-fix steps can mitigate the issue.

ControlNet strength notes: to test, render once at 1024x1024 with ControlNet strength 1.000 and once at 0.000 (0.000 disables it). Never set Shuffle or NormalBAE strength too high or the result is like an inpainting. One sketch-to-image recipe applies ControlNet 1.1 using a Lineart model at strength 0.75; another uses t2i-adapter_xl_sketch, initially set to a strength of 0.75 with an early end percent. For upscaling while keeping text intact: option a) t2i with low denoising strength plus ControlNet tile resample, or b) i2i inpaint plus ControlNet tile resample (if you want to maintain all the text). One cryptic but insistent tip: at the latest in the second step, the golden CFG must be used.

LCM and Turbo recipes: some cannot find settings that work well for SDXL with the LCM LoRA, while it worked normally with the regular SD 1.5 models at roughly LCM LoRA strength 1.0, 8 steps, CFG 1.5 (the Hyper SD 1.5 variant: CFG scale 1.5, 4 steps, LCM scheduler). For AnimateDiff, a favorite recipe was LoRA strength 1.0 (the clip strength should probably have been 0, but was not), sampler Euler, scheduler Normal, 16 steps; the Restart KSampler at 64 steps was even better but has its own limitations (no SGM_Uniform scheduler for AnimateDiff).

adetailer: just use your LoRA with adetailer and don't add anything beside it, because adetailer gets confused with too much LoRA; e.g. main-prompt LoRA strength 0.6 with a lower adetailer LoRA strength. It works with Turbo and Lightning models too.
and "turn them off/on" by either zeroing the strength_model and strength_clip, or assigning But adding trigger words in the promt for a lora in ComfyUI does Welcome to the unofficial ComfyUI subreddit. Thanks - pretty similar. 1) using a Lineart model at strength 0. My only complaint with the Lora training node is that it doesn't have an output for the newly created Lora. Denoise is 0. ComfyUI only allows stacking LoRA nodes, as far as I know. And some LoRA do not play well with some checkpoint models. There are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation. When I use this LORA it always messes up my image. 19K subscribers in the comfyui community. CLIP Strength: Most LoRAs don't contain any text token training (classification labels for image concepts in the LoRA data set). But I've seen it enhance features with some loras. Some may work from -1. When you have a Lora that accepts float strength values between -1 and 1, how can you randomize this for every generation? high clip strength makes your prompt activate the features in the training data that were captioned, and also the trigger word. 0 (I should probably have put the clip_strength to 0 but I did not) sampler: Euler scheduler: Normal steps: 16 My favorite recipe was with the Restart KSampler though, at 64 steps, but it had its own limitations (no SGM_Uniform scheduler for AnimateDiff). (i don't need the plot just individual images so i can compare myself). I feed the latent from the first pass into sampler A with conditioning on the left hand side of the image (coming from LoRA A), and sampler B with right-side conditioning (from LoRA B). It worked normally with the regular 1. yaml file uses an intuitive indentation method for its files within the Lora folder, however the same doesn't seem to be the case in Comfy and any variances I after getting 'Lora stacker' i am only getting these weird kinds of results, i dont understand what im doing wrong, im following a tutorial on youtube and he seems to have no issue using the lora stacker Welcome to the unofficial ComfyUI subreddit. It's so fast! | LCM Lora + Controlnet Openpose + Animatediff (12 steps decreasing the lora strength removing negative prompts decreasing/increasing steps messing with clip skip None of it worked and the outcome is always full of digital artifacts and is completely unusable. Styles are simply a technique which helps an artist to create consistantly good images that they and others will enjoy. Works well, but stretches my RAM to the absolute limit. The image below is the workflow with LoRA Stack added and connected to the other nodes. Belittling their efforts will get you banned. Is it the right way of doing this ? 16K subscribers in the comfyui community. 0 is the best way to control the general brightness or darkness of an image. And a few Lora’s require a positive weight in the negative text encode. 5 Steps: 4 Scheduler: LCM LoRA: Hyper SD 1. and Trained the Lora with the LCM Model in the TensorRT LoRA tab also. sexy,<lora:number1:1. Reply reply More replies. So my though is that you set the batch count to 3 for example and then you use a node that changes the weight for the lora on each bath. ; start_percent: when to start applying the lora keyframes, eg: . i converted to ComfyUI from A1111 for a year now. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt Hello! 
Training questions keep coming up: "I've been playing around with ComfyUI for months now and reached a level where I want to make my own LoRAs, but where do I begin? Anyone know any good tutorials for a LoRA training beginner?" The same from a team diving into the world of LoRA models (training on Kohya) for automotive projects, and from a newcomer to Stable Diffusion who tried fine-tuning using LoRA.

Putting two character LoRAs in one image (SD1.5, not XL): take a LoRA of person A and a LoRA of person B and place them into the same photo. The known route is to generate an image of 2 people using one LoRA (it will make the same person twice) and then inpaint the face with the other LoRA, using OpenPose and/or regional prompting to hold the composition. A LoRA affects the model output, not the conditioning, so MultiArea prompting doesn't help by itself; there are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation. The proposition inside THE LAB: write the MultiArea prompts as if you would use all the LoRAs at the same time, and if you have a Pikachu LoRA and an Agumon LoRA, for example, write the trigger words in the relevant cases. The same applies to styling a character LoRA into a given situation, e.g. Bob as a paladin riding a white horse in shining armour.

Wildcards interact awkwardly with all of this: a single wildcard prompt can range from 0 LoRAs to 10, which unfortunately means you can't drive LoRAs from wildcards in ComfyUI without an ungodly amount of nodes.

Q: When you have a LoRA that accepts float strength values between -1 and 1, how can you randomize this for every generation? One in-graph answer is sketched below.
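A hedged sketch of that in-graph answer, again as a hypothetical custom node: its FLOAT output wires into a loader's converted strength input, and the IS_CHANGED trick (returning NaN, which never compares equal to itself) defeats ComfyUI's node caching so a new value is rolled on every queued generation:

```python
import random

# Hypothetical ComfyUI custom node: a fresh random LoRA strength per generation.
class RandomLoraStrength:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "minimum": ("FLOAT", {"default": -1.0, "min": -10.0, "max": 10.0, "step": 0.05}),
            "maximum": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.05}),
        }}

    RETURN_TYPES = ("FLOAT",)
    FUNCTION = "roll"
    CATEGORY = "utils/lora"

    @classmethod
    def IS_CHANGED(cls, minimum, maximum):
        return float("nan")  # NaN != NaN, so the cache always considers us stale

    def roll(self, minimum, maximum):
        return (random.uniform(minimum, maximum),)

NODE_CLASS_MAPPINGS = {"RandomLoraStrength": RandomLoraStrength}
```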
LoRA usage is confusing in ComfyUI at first, and the most common question is: where do I change the number to make it stronger or weaker, in the loader, in the prompt, or both? The loader. Its two knobs do different jobs. strength_model controls how strongly the UNet patch is applied; it is just weight strength times however many LoRAs you stack. CLIP strength is subtler: most LoRAs don't contain any text-token training (classification labels for image concepts in the LoRA's data set), so strength_clip often has little impact on achieving the desired results of the LoRA; but a high clip strength makes your prompt activate the features in the training data that were captioned, and also the trigger word, and it can visibly enhance features with some LoRAs. In practice, both are usually highly correlated. A related wiring question, "if I want to add a LoRA to this, where would the CLIP connect to?", has a simple answer: the loader takes both MODEL and CLIP in and passes both out, so route the checkpoint's CLIP through it just like the model.

For merging rather than stacking: there is a ComfyUI extension that offers a wide range of LoRA merge techniques (including DARE), and it provides XY-plot components to better evaluate merge settings. There are also custom nodes to mix LoRAs or load them all together; a minimal merge sketch follows below.

And for the video crowd, the "NICE DOGGY" method: drop a clip into a "simple" vid2vid workflow that primarily offers a customizable LoRA stack, so you can update the style while ensuring the same shape, outline, and depth, and then output a new video; its author finds it gives more control than AnimateDiff or Pika/Gen-2, with more consistency, higher resolutions, and much longer videos.
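The baseline behind all of those merge techniques is a weighted sum of the two LoRAs' tensors; DARE and friends add random dropout and rescaling of the delta weights on top. A hedged minimal sketch of the plain version (file names invented):

```python
from safetensors.torch import load_file, save_file

def merge_loras(path_a, path_b, weight_a=0.5, weight_b=0.5,
                out_path="merged_lora.safetensors"):
    """Naive linear merge of two LoRA files with matching architectures.

    Keys present in only one file are carried over, scaled by that file's weight.
    """
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key in sorted(a.keys() | b.keys()):
        ta, tb = a.get(key), b.get(key)
        if ta is None:
            merged[key] = tb.float() * weight_b
        elif tb is None:
            merged[key] = ta.float() * weight_a
        else:
            merged[key] = ta.float() * weight_a + tb.float() * weight_b
    save_file(merged, out_path)

# merge_loras("styleA.safetensors", "styleB.safetensors", 0.7, 0.3)
```

Strictly speaking, summing the low-rank up/down factors only approximates summing their products, which is part of why dedicated merge nodes exist.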