Very basic Forge AI Guide – Episode 2 – img2img & inpainting

Hello, Hexxet here,

We continue where we left off last time. We’ve got Forge to the point where you can run a model with a text prompt, but the details or overall quality of the resulting image might not be great yet. For that purpose, we’ll now have a look at img2img and inpainting.

Once you have created an image with txt2img, you can use the buttons under that image to (from left to right):

  • 1.) open the folder where Forge stores these created images
  • 2.) save that image to the webui/log/images folder
  • 3.) save the images (if more than one was created) as a zip to the folder above
  • 4.) send the image to the img2img tab
  • 5.) send the image to the inpaint tab
  • 6.) send the image to the Extras tab
  • 7.) create an upscaled version of your image using the Hires. fix options

img2img Generation

Let’s have a look at img2img generation first. Send the image you have created to img2img and Forge will switch to the corresponding tab, displaying your image. img2img works much like the txt2img tab, but it takes the starting image into account and applies your prompt to that image.

Most of the parameters in the img2img tab are the same as in the txt2img tab but there are some things to consider.

1.) Usually you want to set the Sampling Method/Schedule Type to the same ones you used for your txt2img generation, so you stay in the same style as the original image.

2.) Since you want to generate different results and might try several times in succession to repaint your image, set the seed to -1, which means a random seed: every time you click Generate, a new seed is used. (The same seed usually generates roughly the same image.)

3.) Denoising Strength: This parameter is new and specifies how far from the original image the AI is allowed to go. 0.75 seems to be a good starting point. If you go too high, a completely new image will be generated; if you go too low, you won’t notice any changes.

When you have set your parameters just press Generate again and the AI will overhaul your image.
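By the way, if you ever want to script these steps instead of clicking through the UI: Forge is built on the AUTOMATIC1111 WebUI and inherits its optional HTTP API when you launch it with the --api flag. Here is a minimal Python sketch of the same img2img call; the port, file names, prompt, and sampler name are assumptions you will need to adapt to your own setup:

```python
# Minimal img2img sketch against Forge's AUTOMATIC1111-style HTTP API.
# Assumes Forge was started with --api and listens on the default port 7860.
import base64
import requests

URL = "http://127.0.0.1:7860"

# Load a starting image (e.g. one you generated with txt2img) as base64.
with open("start_image.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "1girl, bikini, best quality",  # placeholder prompt
    "negative_prompt": "lowres, bad anatomy",
    "sampler_name": "DPM++ 2M Karras",  # use the same sampler as the original image
    "seed": -1,                  # -1 = new random seed on every run
    "denoising_strength": 0.75,  # how far the AI may stray from the input image
    "steps": 25,
    "cfg_scale": 7,
    "width": 1024,
    "height": 1024,
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

# The API returns the results as base64-encoded images.
with open("img2img_result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```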

Why use img2img?

a.) You can overhaul a previously generated image to try to increase its quality by:

  • simply redoing the same image and hoping it gets better 🙂
  • adding parameters to the prompt that might improve the quality (like “best quality” or a style LoRA)
  • increasing the resolution by scaling up the width and height

In general, it’s a good approach to generate smaller images with txt2img first until you have one you like, and then scale that one up with img2img if you want, say, 2048×2048 images. (Note that there is another way to do this as well: Hires. fix in the txt2img tab, which in my opinion creates even better upscaling results, but that is out of the scope of this chapter.)
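If you went the API route from the sketch above, the scale-up is just the same /sdapi/v1/img2img call with a larger target resolution, roughly like this (again, file names and values are placeholders):

```python
# Upscaling via img2img: the same endpoint, just with a larger output size.
# Assumes Forge is running with --api on the default port.
import base64
import requests

with open("picked_small.png", "rb") as f:  # the small txt2img image you liked
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "the same prompt you used for the small image",
    "denoising_strength": 0.4,  # lower than 0.75 keeps the composition intact
    "width": 2048,              # the larger target resolution
    "height": 2048,
    "seed": -1,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("upscaled_2048.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```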

b.) Change the image as a whole in a certain direction. This works decently well for some transformations, like:

  • make breasts bigger/smaller
  • age-up/down characters
  • change clothes to some degree

You can achieve this by simply changing your prompt or adding/removing parameters. So, if we previously created a MILF with decent-sized breasts, let’s remove “MILF”, write “girl” instead, and add “(gigantic tits:1.2), bikini”.

The resulting image should be a younger woman with bigger breasts. (Note that I added “bikini” because that word was not in my original prompt, and the word “tits” usually leads to nude/topless pics if no clothes are specified.)

Please note that the AI takes your input pic into account. The Denoising Strength specifies how far away from the original pic you want to go. So the woman in the picture will still be about the same height and build. She might end up a bit thinner, more toned, with a younger face, but you won’t be able to shrink her down, for example.

c.) You can also use img2img to start from images you created in a 3D software or got somewhere else, like what I tried with my AI reworks of “Dylan and the Th IP³ Chapters 1-3”. To send a custom image from your computer to img2img, just drag and drop it into the img2img Generation area.

Inpainting

But most of the time (I guess) we don’t want to change the whole image, only parts of it. How can we do that? That’s what the inpaint tab is for. You can send images to it from either the txt2img or the img2img tab.

Why use inpaint?

Often there are artifacts in your image that you don’t want there, or parts we want done differently while keeping the rest of the image. With inpaint we can mark the area we want to change and then press “Generate”. The AI will now rework only the marked area and, with some luck, change it for the better :).

There are some parameters and strategies for inpainting that can have a huge impact on the result. First of all, set the Sampling Method/Schedule Type to whatever you used for the image so far; otherwise the inpainted area will most likely look different from the rest of the image.

Inpaint “Whole picture” vs. “Only masked”

Let’s send the image of our younger woman with the giant tits to the inpaint tab. We replace “bikini” with “red bikini” and “(gigantic tits:1.2)” with “(small tits:1.2)”. Also, we inpaint the breast/bikini-top area in our image. (To do that, click and drag the mouse over the parts of the image you want to mark in the inpaint area.)

Once marked, click “Generate”.

Here is the result with “Whole picture” active:

And here is the one with “Only masked”:

They look slightly different, but in general both did what we wanted. So what’s the difference?

“Whole picture” takes the whole image into account when inpainting. This gives the AI more information about the image and usually leads to more fitting results. With “Only masked” it is easier for the inpainted area to distort and look obviously inpainted. However, “Only masked” has one huge advantage: it can generate better details.

For example, let’s say we inpaint an area that is about 50×50 pixels big. “Whole picture” will have only those 50×50 pixels available to repaint that area, so the detail/resolution in that area will not go up. If, however, we use “Only masked”, the generation has the full resolution, e.g. 1024×1024 pixels, available for that area, increasing its detail dramatically. The easiest example to show this is fixing face distortions. Please note that with the above images I cheated by having aDetailer switched on. Without aDetailer, our woman would have looked more like this:

As you can see, the face does not look very attractive because it’s clearly pixelated. Let’s send this image to the inpaint tab, set it to “Only masked”, inpaint the face, and press Generate:

The face is now much more detailed than before. (This is actually what aDetailer does in an automated way; I plan to dedicate a later chapter to aDetailer though.)

So, as you can see: if you want to improve the quality of a certain area of an image, use “Only masked”.
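For the API-minded: inpainting goes through the same /sdapi/v1/img2img endpoint, with the mask sent as a black-and-white image. In the AUTOMATIC1111-style API the “Only masked”/“Whole picture” switch is, as far as I know, the inpaint_full_res field; everything below is a sketch with placeholder file names and values:

```python
# Inpainting sketch: same endpoint as img2img, plus a mask image.
# White pixels in the mask are repainted, black pixels are kept.
# Assumes Forge is running with --api on the default port.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("full_image.png")],
    "mask": b64("face_mask.png"),
    "prompt": "detailed face, best quality",  # prompt only for the masked area
    "denoising_strength": 0.4,       # gentler value for faces (see tips below)
    "mask_blur": 4,                  # soften the mask edge a little
    "inpainting_fill": 1,            # 1 = start from the original pixels
    "inpaint_full_res": True,        # True = "Only masked", False = "Whole picture"
    "inpaint_full_res_padding": 32,  # extra context pixels around the mask
    "seed": -1,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```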

But improving an area is not the only use for “Only masked” inpainting. We can also use it to add new elements to an image. For example, see those two loungers in the background? Let’s make a table out of them!

We inpaint the loungers, add “table” to our prompt, and press Generate!

And tada! The loungers are gone and there is a table instead. If you happen to also see a girl in a red bikini, then you forgot to remove the part about the bikini-clad woman from the prompt!^^

“Only masked” inpainting will only consider the masked area and will try to conform it as closely as possible to your prompt. So you might need to use different prompts for different areas.

Really cool Tip here

Note that there is a cool trick you can use with this method that might save you from changing the prompt at all. Inpaint a very small dot somewhere else in the image, for example to the left, under our woman’s breasts. A small dot won’t change much on its own, but the AI will now consider the rectangle you get by connecting that dot with the masked lounger area (so the area containing the woman’s upper torso as well as the loungers). It will still only repaint the masked spots, but it now takes everything inside that rectangle into consideration, and it will most likely realize that there already is a woman in a red bikini, so hopefully it will only draw the table 🙂

Tada! As you can see, we got a table in the background without a bikini-clad girl… It’s still a pool table though, because the word “pool” interfered with the word “table”, I guess XD. So, yeah, sometimes you won’t be able to get around changing your prompt. But in general, this trick can be very handy. (Many thanks to Biohazardry, who told me about this.)

Please consider though: If the area gets too big (by using those dots) you might lose details again.

General inpainting Tips:

If there are several areas you want to change in your image, inpaint one area at a time. It will be tricky to find one prompt that matches them all, and if you use “Only masked”, you will lose details if you inpaint them all at the same time.

Consider changing the Denoising Strength. How much to use often depends on the model and the CFG Scale. With the example model, 0.75 worked pretty well for everything so far, but lower values are usually better for more subtle changes. 0.4 is a good value for inpainting faces; small areas like genitals, nipples, and hands might go even lower than that. Also, some models use very low CFG Scales; if the CFG Scale goes below 2, even a fairly low Denoising Strength can lead to complete distortions, and you might need to go as low as 0.2 in those cases.
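If it helps, here are those rules of thumb collected in one place as a tiny Python snippet; the exact numbers are just the starting points from this section, not hard rules:

```python
# Rough Denoising Strength starting points, summarizing the tips above.
# Tune per model and CFG Scale; these are rules of thumb, not hard values.
DENOISING_GUIDE = {
    "general img2img / inpainting": 0.75,
    "faces": 0.4,
    "small areas (hands, etc.)": "below 0.4",
    "models running at CFG Scale below 2": 0.2,
}

for use_case, strength in DENOISING_GUIDE.items():
    print(f"{use_case}: {strength}")
```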

If you have a different experience with Img2Img or inpainting or have some cool other tips, let me know! 🙂
