SD Dream node parameters
Output file name is a template for the filename. You don't have to add a file extension here.
Append name will add a number to the filename based on the task and batch numbers. You can also use @pdg_index in backticks in your filename, but that does not account for the batch number, so it's better to use Append name.
Save filename to attribute will save the name of the file, with all the appended numbers, to an attribute. Later you can use it in any node by wrapping it in backticks, like this: `@filename`.
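For example, a downstream Python Script TOP can read that attribute through the PDG API. A minimal sketch, assuming you named the attribute "filename" (the output location below is just an example):

```python
import os

# Inside a Python Script TOP: "work_item" is provided by PDG.
# Read the filename that SD Dream saved to the attribute.
name = work_item.attribValue("filename")

# Build a full path (example location) and stash it on the work item
# for nodes further downstream.
out_path = os.path.join(os.environ.get("HIP", "."), "render", name + ".png")
work_item.setStringAttrib("full_path", out_path)
```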
Seed is a 64-bit number that defines the image generation process: if you use the same seed with the same parameters and prompt, you should get the same result. Put -1 here to randomize the seed each time.
The seed increment option will add one to the seed for each iteration (for both tasks and batches), so all your images will have different seed values, as the sketch below illustrates.
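The exact numbering scheme isn't spelled out here, but the idea is simple arithmetic; a sketch under the assumption that seeds advance batch-by-batch within each task:

```python
import random

base_seed = 12345   # the Seed parameter; -1 means randomize
batch_size = 4      # images generated per task

def seed_for(task_index, batch_index):
    if base_seed == -1:
        # Seed = -1: draw a fresh random seed in the 64-bit range instead.
        return random.randint(0, 2**63 - 1)
    # Seed increment enabled: one unique seed per generated image.
    return base_seed + task_index * batch_size + batch_index

print(seed_for(0, 0))  # 12345
print(seed_for(0, 3))  # 12348
print(seed_for(1, 0))  # 12349
```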
You can set the image resolution by tweaking Width and Height, or you can tick the "Upstream image resolution" checkbox to use the resolution of the incoming image.
You can set "Prompt Source" to "Custom". This way the node will read prompts from the Prompts foldout. Or you can set it to "Upstream attribute" to use prompts from upstream nodes.
It's a "Batch size" parameter from Automatic1111. If you put 4 here, it will try to generate 4 images simultaneously with a unique seed for each. The possible amount of batches depends on your GPU memory and image resolution. All the generated batches will be turned into tasks at the output of the node.
You can switch to a particular model right on this node, but I'd suggest leaving it set to "Current model" and switching models with SD Switch Model instead.
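For reference, switching checkpoints goes through Automatic1111's options endpoint in the web API; a minimal sketch (the URL is a placeholder, and this is the public Automatic1111 API rather than this node's internals):

```python
import requests

url = "http://127.0.0.1:7860"  # your Automatic1111 address (placeholder)

# List the available checkpoints, then switch to the first one.
models = requests.get(f"{url}/sdapi/v1/sd-models").json()
print([m["title"] for m in models])

requests.post(f"{url}/sdapi/v1/options",
              json={"sd_model_checkpoint": models[0]["title"]})
```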
Sampler is the algorithm for finding the right spot on the treasure map (see Prompting Basics). There are a lot of different options available here, and which one to use is a theme for heated discussions on Reddit. I tend to stick to Euler A for static images.
CFG Scale is how strongly Stable Diffusion will try to match your prompt. Roughly speaking, it's a mix value between a "no prompt at all" generation and an "only the prompt" generation. Usually it should be in the 7-15 range, but in some cases, such as when you use a particular Lora or the Alternative image2image test, you may want to lower this value.
Sampling steps is how many iterations it takes to generate your image. The right number depends on the Sampler: for Euler A, 20 steps is enough; for some others you should increase the number.
Restore faces will try to find a face in your generated image and restore it with the CodeFormer or GFPGAN networks; you can choose which one to use in the Automatic1111 settings tab. It only works when the face is vertical and the style is photorealistic. I don't use it often, as good models will render good faces without this option.
Tiling will try to create a tileable texture.
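All of the generation parameters above have direct counterparts in Automatic1111's txt2img API, which is presumably what the node talks to; a hedged sketch of such a request (the URL and values are examples, the field names are from the public web-UI API):

```python
import base64
import requests

url = "http://127.0.0.1:7860"  # your Automatic1111 address (placeholder)

payload = {
    "prompt": "a treasure map on an old wooden table",
    "seed": 12345,              # -1 randomizes
    "width": 512,
    "height": 512,
    "batch_size": 4,            # Batch size
    "sampler_name": "Euler a",  # Sampler
    "cfg_scale": 7,             # CFG Scale
    "steps": 20,                # Sampling steps
    "restore_faces": False,     # Restore faces
    "tiling": False,            # Tiling
}

r = requests.post(f"{url}/sdapi/v1/txt2img", json=payload).json()
# One base64-encoded PNG per image in the batch.
for i, img in enumerate(r["images"]):
    with open(f"dream_{i}.png", "wb") as f:
        f.write(base64.b64decode(img))
```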
This tab will appear in Image2Image mode.
Denoising strength influences how much your initial image will be changed: 0 means not changed at all, 1 means changed completely. For minor fixes, use low values here.
You can choose an upstream image here (for example from another generation or from a File Pattern node) or a custom file on disk.
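On the API side, the initial image and Denoising strength map onto Automatic1111's img2img endpoint; a minimal sketch (the file paths and values are examples):

```python
import base64
import requests

url = "http://127.0.0.1:7860"  # your Automatic1111 address (placeholder)

with open("init.png", "rb") as f:  # the initial image from disk
    init_b64 = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_b64],
    "prompt": "the same scene, golden hour lighting",
    "denoising_strength": 0.4,  # low value = only minor changes
    "steps": 20,
}

r = requests.post(f"{url}/sdapi/v1/img2img", json=payload).json()
with open("img2img_out.png", "wb") as f:
    f.write(base64.b64decode(r["images"][0]))
```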
See How to use Inpainting: https://github.com/stassius/StableHoudini/wiki/How-to-use-inpainting
See How to use ControlNet: https://github.com/stassius/StableHoudini/wiki/How-to-use-ControlNet
Lets you choose the URL this node works with. When it's turned off, the node uses the default value from the /hda/Config/Config.ini file.
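The default is read from a plain INI file; a sketch of inspecting it with Python's configparser (the section and key names below are assumptions, check your actual Config.ini):

```python
import configparser

config = configparser.ConfigParser()
config.read("/hda/Config/Config.ini")

# Section and key names are hypothetical; open the file to see the real ones.
url = config.get("Main", "url", fallback="http://127.0.0.1:7860")
print(url)
```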
Here you can send your generated image to an external program as a command line argument.
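In other words, the node runs something like the following, substituting the generated image's path as the argument (a generic sketch; the program and path are examples):

```python
import subprocess

image_path = "/tmp/dream_0001.png"  # the generated image (example path)

# Launch an external program with the image path as a command-line
# argument; replace "gimp" with whatever program you want.
subprocess.Popen(["gimp", image_path])
```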