TPL["vamdiffusion"] = `
photo_camera
acute
help
Positive terms to describe what the image should contain.
help
Negative terms to describe what the image should not contain.
help
This is the sampling steps value from A1111: the number of iterations the AI will go over the image. With more steps the AI will add more details to the image, but it will also take more time to finish. Default value is 17 (the default in A1111 is 20). People usually use values between 15 and 30.
help
CFG value from A1111. Controls how big of an impact the prompt has. Lower values let the AI be more creative and follow what it 'knows'; images will look a bit hazier, more desaturated, and more realistic. Higher values force the AI to try to satisfy the prompt more; images will look a bit more saturated and forced. Default value is 7.
help
Denoise value from A1111. Controls how closely the AI will follow the input (the screenshot from VAM). At higher values the AI will ignore the image more, while at lower values it will try to follow it more closely. Default value is 0.4.
General
help
ControlNet mode uses extra algorithms when generating the image, calculating things like outlines, depth, and poses from the input VAM image and using them as references for the generated output. This helps the resulting images keep the position and layout of the original VAM image, at the cost of speed. For best performance use the default value: normal.
help
Width of the generated image.
help
Height of the generated image.
help
This is the path where the Cam photos will be saved. It can't be changed.
AI Settings
help
When enabled, A1111 will run an extra step to improve the faces in the image. The quality will be much better, but it will take longer to finish the image. Works best for photos with normal faces; if the face is covered this might cause weird results. By default this is off.
help
The algorithm used by the AI when moving from step to step. Some focus more on continuity, others on reaching a good result faster. Explaining each is too complicated; google "stable diffusion sampling examples" and it will make more sense. The default value is Euler a.
help
When set to 0, the generated images will always be random, even if everything sent to the AI is identical. If you set a value bigger than 0, the AI will always use that particular randomization seed, keeping the result constant if you send the same input. You can use this to test prompt words or settings and see how they influence the generated image. Default value is 0 (images will be random every time).
help
When enabled, Alive will also save the VAM preview in the diffusion screenshot folder. By default this is on.
ControlNet
help
ControlNet is a method that forces the generated images to look a specific way, for example making people hold very specific poses or making photos appear taken from specific angles. There are two main steps: a preprocessor and the model itself. The preprocessor 'translates' the input image into a ControlNet-friendly format. For some methods that means it just finds the edges, giving you kind of a black and white sketch of the input image (canny, for example). Others try to detect actual people and generate a pose image, which is like a stickman representing the person's pose (openpose, for example). The preprocessed image is then sent along to the normal image AI model, and the ControlNet model takes care of steering the AI in that direction. For VAM purposes you need to match the preprocessor and model, canny with canny, openpose with openpose, otherwise you'll get more abstract results.
help
This is the main ControlNet module used. These need to be installed inside A1111; they're not enabled by default. To install them google "controlnet automatic1111 install". Default value is canny.
help
Refresh
This is the ControlNet model used when generating the image. It must match a value available in the ControlNet model dropdown of your Automatic1111 ControlNet settings. Use the custom model field below in case you want to use a model that is not in the list. Default value is control_v11p_sd15_canny [d14c016b].
help
This represents the weight of the ControlNet processing. Lower values give the AI more freedom, while higher values force the AI to keep closer to the input image. Default value is 1.
help
ControlNet preprocessor low threshold value, for preprocessors that support it like canny. Default value is 64.
help
ControlNet preprocessor high threshold value, for preprocessors that support it like canny. Default value is 64.
help
ControlNet preprocessor resolution when processing the image. Higher values will make the result follow the input image more closely. Default value is 200.
help
When checked, ControlNet will run in low VRAM mode. This reduces GPU memory usage and prevents running out of VRAM on larger images, but it increases the time it takes to generate the image. By default this is checked.
help
Changes which has more impact when generating the image: the text prompt or the ControlNet guidance. Default is Balanced.
Reset settings
help
Press this to reset all AI Cam settings to default values.
Reset to defaults
LoRAs

To generate images using the AI modes of the Cam app, you need to have Automatic1111 installed and running.


Guide video on YouTube: "ALIVE: Photorealistic screenshots with StableDiffusion"

`;
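
// For context when debugging, the defaults described in the tooltips above
// correspond roughly to the payload below. This is a sketch, not this app's
// actual request code: the field names come from the public Automatic1111
// /sdapi/v1/img2img endpoint and the sd-webui-controlnet extension API, and
// buildImg2ImgPayload is a hypothetical helper.

```javascript
// Hypothetical helper: map the Cam defaults onto an A1111 img2img request body.
function buildImg2ImgPayload(screenshotBase64) {
  return {
    init_images: [screenshotBase64],     // the VAM screenshot, base64-encoded
    prompt: "photo, realistic",          // positive terms
    negative_prompt: "cartoon, blurry",  // negative terms
    steps: 17,                           // sampling steps
    cfg_scale: 7,                        // CFG value
    denoising_strength: 0.4,             // denoise
    sampler_name: "Euler a",             // sampling method
    seed: -1,                            // the API uses -1 for random (the Cam UI uses 0)
    restore_faces: false,                // face restoration is off by default
    alwayson_scripts: {
      controlnet: {
        args: [{
          module: "canny",                             // preprocessor
          model: "control_v11p_sd15_canny [d14c016b]", // ControlNet model
          weight: 1,                                   // ControlNet weight
          threshold_a: 64,                             // low threshold
          threshold_b: 64,                             // high threshold
          processor_res: 200,                          // preprocessor resolution
          lowvram: true,                               // low VRAM mode
          control_mode: "Balanced",                    // prompt vs ControlNet balance
        }],
      },
    },
  };
}
```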