Very basic Forge AI Guide – Episode 1

Hello, Hexxet here,

Some people have been asking me how I do my AI stuff, and I always answer that it’s easy and try to give tips when asked. But so far I have not sat down to actually write it all up. So, if you are interested in generating your own AI pics the way I do it, this “guide” might be for you. However, please note that if you already have experience with local Stable Diffusion generation, there probably won’t be any new info for you here – this is really very basic.

Requirements

There are great, freely available tools for generating AI images locally, but without a certain level of hardware you can’t use them. The most important part is the graphics card. It has a direct impact on your generation speed, and if it lacks a certain amount of VRAM, I think you can’t use it for AI image generation at all.

Personally I’m using a GeForce RTX 3090, which was the high-end standard 2-3 years ago, I think. If you have one of those, for example because you have a gaming PC, you are golden. But less might work as well. From what I get from googling, you should have at least 4 GB of VRAM on your graphics card (12 GB would be better). PS: Do not confuse VRAM with normal RAM – VRAM is the RAM of your graphics card. Google your graphics card name if you don’t know how much it has; you can find the card name under Windows “Device Manager”. Note: NVIDIA cards are said to work better for this than other cards.

Aside from that, you obviously need disk space to install the AI tools and models; I’d say you should reserve at least 50 GB for it – more is always better, of course. Your PC should also have 16 GB of normal RAM. If you want more detailed info about the requirements, I found this article here, but also this one here.
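
If you prefer checking these numbers from a script instead of Device Manager, here is a minimal Python sketch (assuming you have PyTorch and psutil installed – this is just an illustration of the checks, not something Forge requires you to run):

```python
# Rough hardware sanity check (PyTorch and psutil are assumptions, not Forge requirements).
import shutil

import psutil  # third-party: pip install psutil
import torch   # third-party: pip install torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected - generation will be very slow or impossible.")

print(f"System RAM: {psutil.virtual_memory().total / 1024**3:.1f} GB")
print(f"Free disk space: {shutil.disk_usage('.').free / 1024**3:.1f} GB")
```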

Tool for Local AI generation – Stable Diffusion and Forge

I’m using Stable Diffusion to generate my AI pics. To use Stable Diffusion you need an interface… unless you are some AI guru who does it by hand, I guess. There are many tools that work with Stable Diffusion and offer certain benefits. I am using Forge because its usage is very simple (in comparison to ComfyUI) and it’s much faster than Automatic1111.

To get Forge go to: https://github.com/lllyasviel/stable-diffusion-webui-forge

Don’t let yourself be confused by Git or anything that looks like code. Just scroll down to the headline “Installing Forge”:

Unless you have knowledge of Git and PyTorch, just go with the first bold link, “One-Click Package”, and download the zip. This might take a bit – it’s about 1.7 GB.

Once the download has finished, unzip the package (you might need a program like WinRAR or 7-Zip to do that) and copy its contents to the location you want to run your AI from (an SSD, if you have one, probably speeds up startup time). Before you run Forge, you should execute the update.bat file – this will open a command window that updates your folder to the newest version. Once that is complete, you can start Forge by double-clicking the run.bat file. After start-up, Forge should open in your browser and look like this:

There are quite a lot of options here. For now we will only deal with the most essential ones to create our first AI image.

Getting a model

Now, before you can start using this interface, you need an actual AI model. There are other sites to get models from, but I’m using https://civitai.com/. You need to register an account there, but it’s free. You can browse for a model that suits your aesthetic taste, but for our first generation I would just recommend a model I have used for some of my comics: solus-mix

Note: AI models are quite large – ~6-7 GB for SDXL/Pony – so the download might take a bit.

PS: If you have a graphics card with low VRAM, you might not be able to run SDXL/Pony models, but smaller SD models might still work. I don’t have experience with those, though.

Once your download has finished, navigate to your Forge installation folder and from there to: webui > models > Stable-diffusion

Put the model you just downloaded (a file with the extension “.safetensors”) into this folder and you are ready to go.
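
If you want to double-check that the file landed in the right place, a tiny Python sketch like this lists every model in that folder (the installation path is a placeholder – adjust it to wherever you unpacked the One-Click package):

```python
from pathlib import Path

# Placeholder path - point this at your own Forge installation folder.
forge_root = Path(r"C:\forge")
model_dir = forge_root / "webui" / "models" / "Stable-diffusion"

for model_file in sorted(model_dir.glob("*.safetensors")):
    size_gb = model_file.stat().st_size / 1024**3
    print(f"{model_file.name}  ({size_gb:.1f} GB)")
```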

Quickstart:

  • 1 – Select the model you have downloaded here. (You might have to press the blue update arrow next to the dropdown box if you started Forge before you added the model to the installation folder.)
  • 2 – Write what you want to see in the prompt text box, e.g. “score_9, score_8_up, score_7_up, masterpiece, best quality, candid photo of a naked MILF laying by the pool”, and use the negative prompt box for: “bad quality, blurry, fat”
  • 4 – Set width to 1024 and height to 1024
  • 5 – Set CFG scale to 5
  • 6 – Set the sampling method to Euler a
  • 7 – Set the schedule type to Automatic
  • 8 – Set sampling steps to 30
  • 3 – Press this button!
  • Your computer should now start to work and generate your image. Once it is done, it will appear in the image area at 9.

Note: Your resulting image might not have the desired quality. Especially faces might not turn out cute enough. That is okay – this is just the first step, and I’ll hopefully talk about how to fix that in a later guide. For now, if you want a quick fix, just tick the “ADetailer” checkbox a bit below the CFG scale. (I hope this works out of the box – I’m not sure whether I had to install something extra for ADetailer or not ^^)
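
As a side note, if you are curious what these quickstart settings correspond to outside of the GUI, here is a rough equivalent using the Hugging Face diffusers library. This is purely illustrative – it is not how Forge works internally and not part of my workflow, and the model filename is a placeholder for whatever you downloaded:

```python
# Rough diffusers equivalent of the quickstart settings above.
# NOT how Forge works internally - it just shows the same knobs in code.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Placeholder filename - use the .safetensors file you downloaded from civitai.com.
pipe = StableDiffusionXLPipeline.from_single_file(
    "solus-mix.safetensors", torch_dtype=torch.float16
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler a"
pipe.to("cuda")

image = pipe(
    prompt=(
        "score_9, score_8_up, score_7_up, masterpiece, best quality, "
        "candid photo of a naked MILF laying by the pool"
    ),
    negative_prompt="bad quality, blurry, fat",
    width=1024,              # square SDXL resolution
    height=1024,
    guidance_scale=5.0,      # CFG scale
    num_inference_steps=30,  # sampling steps
).images[0]
image.save("first_generation.png")
```

Every knob in this sketch maps to one of the numbered settings above: guidance_scale is the CFG scale, num_inference_steps is the sampling steps, and the scheduler plays the role of the sampling method.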

Longer Explanations:

1.) In the top-left corner you can use the checkpoint dropdown box to specify which model to use for image generation. Choose the model you’ve downloaded previously from civitai.com. (You might have to press the blue update arrow next to the dropdown box if you started Forge before you added the model to the installation folder.)

2.) Make sure you are on the Txt2img tab. There are more tabs here, but we just want to write some text and have the AI generate an image from it, so we use this tab for now. In the prompt text box, write what you want to see, e.g. “score_9, score_8_up, score_7_up, masterpiece, best quality, candid photo of a naked MILF laying by the pool”

Note: the “score” values are needed for most Pony models (the solus-mix model I recommended is a Pony model); these values increase image quality. “Masterpiece” and “best quality” work for all models to slightly increase quality, I think.

You can use the negative prompt to tell the AI not to show you the things you write there. This is mostly used to increase image quality by writing “bad quality” or “blurry”, but it also works with anything that turns our images into something we don’t want. For example, write “fat, obese, thick” in the negative prompt if your MILF turns out too chubby for your taste.

3.) Press this button to start image generation. However, have a look at 4 to 8 first, please 🙂

4.) This is the resolution of the picture you are going to generate. If you are using an SD model, you want to go with 512×512 here, I think. If you use any other model, like the one I recommended above, you should go with 1024×1024 for your first generation. (Note: this is for square image generation.) Other formats usually supported by SDXL models are listed below, and a small code sketch with the same presets follows the list:

  • 1:1 (square): 1024×1024, 768×768
  • 3:2 (landscape): 1152×768
  • 2:3 (portrait): 768×1152
  • 4:3 (landscape): 1152×864
  • 3:4 (portrait): 864×1152
  • 16:9 (widescreen): 1360×768
  • 9:16 (tall): 768×1360
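
If you want to keep these presets handy in a script (for example together with the diffusers sketch above), they are easy to restate as a small Python dictionary – this is just the list above in code form:

```python
# Common SDXL-friendly resolutions from the list above, as (width, height) pairs.
SDXL_RESOLUTIONS = {
    "1:1 (square)":      (1024, 1024),  # 768x768 also works
    "3:2 (landscape)":   (1152, 768),
    "2:3 (portrait)":    (768, 1152),
    "4:3 (landscape)":   (1152, 864),
    "3:4 (portrait)":    (864, 1152),
    "16:9 (widescreen)": (1360, 768),
    "9:16 (tall)":       (768, 1360),
}

width, height = SDXL_RESOLUTIONS["3:4 (portrait)"]
```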

5.) CFG scale determines how strictly the AI follows your prompt – lower values give it more freedom when generating your image. Please note that the best value very much depends on the model you use. A rule of thumb I’ve found: if you are generating anime-like images, 5-7 works very well, and if you use a hyper-realistic model you should go lower, like 2-4 – with some special hyper-realistic models going all the way down to 1. (Note: if you choose 1, the negative prompt will be grayed out, as it will be ignored for generation.)

6.) Sampling method: This can have a huge impact on image generation, in terms of quality as well as speed. My rule of thumb: if you are going for anime-like graphics, choose “Euler a” here – it usually generates great results at great speed. If you are going for more realistic stuff, I use “DPM++ 2S a” because it usually gives the best quality. (But it is one of the slowest sampling methods there is, so if you are having problems with speed you might want to switch to another one like “DPM++ 2M”.) In general, it’s also a good rule of thumb to go with the settings specified on the model’s page – most models reference a setting they’ve been tested with. Still, “Euler a” works with almost anything most of the time ^^. If you want to know more about what these methods actually do, please google it; while I have tried most of them out in image generation, I have no clue what’s behind the tech :).

7.) Schedule type: Automatic is a good start, but I’ve noticed that “Normal”, “Simple” and “SGM Uniform” often tend to generate more beautiful images (though it is model-dependent). Some models also work best with “Karras”. (Though note, my personal experience is that even if a model says it’s good with Karras, one of the other options might still work better.)
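
For reference, here is roughly how the sampler and schedule choices above would look in the diffusers sketch from earlier. The class names are the diffusers equivalents that, as far as I know, correspond to Forge’s entries – treat the mapping as an approximation, not an exact match:

```python
# Swapping samplers/schedules in the diffusers sketch above (approximate mapping only).
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,  # ~ "Euler a"
    DPMSolverSinglestepScheduler,     # closest match to "DPM++ 2S a" (no exact ancestral twin)
    DPMSolverMultistepScheduler,      # ~ "DPM++ 2M"
)

# Placeholder model filename, as in the earlier sketch.
pipe = StableDiffusionXLPipeline.from_single_file(
    "solus-mix.safetensors", torch_dtype=torch.float16
).to("cuda")

# Anime-style models: Euler a is fast and reliable.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# More realistic models: DPM++ 2M is a decent speed/quality compromise.
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# The "Karras" schedule type roughly corresponds to use_karras_sigmas=True:
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(
#     pipe.scheduler.config, use_karras_sigmas=True
# )
```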

8.) The sampling steps determine how many iterations the AI uses to generate your image. So it’s kind of a quality/speed setting, but increasing it indefinitely only increases the duration and might even make quality worse for some settings. It is model-dependent, but usually something between 20 and 40 works best. You probably want to try 30 first :).

9.) The resulting image will appear here.

Credits:

Many thanks to Biohazardry. He introduced me to this way of AI generation and he does some great hypno/TF generations himself.
