centuryglass/IntraPaint

IntraPaint_banner.jpg

IntraPaint is a free and open-source image editor for desktop PCs. It combines standard digital painting and image-editing tools with AI-based features, including image generation and inpainting, bridging the gap between traditional editing and AI-driven workflows: the precision of manual techniques with the efficiency of AI tools, for a more seamless creative process. IntraPaint is available for Windows and Linux; macOS support is possible with manual setup and compilation.


Table of Contents:

  1. Getting Started
  2. Key Features
  3. Use Cases
  4. Examples
  5. Installation
  6. AI Setup (Stable Diffusion)
  7. Guides and Tutorials
  8. FAQ
  9. Alternatives

Getting Started with IntraPaint:

(example images 1–4)
  1. Draw and paint with conventional tools.
  2. Select areas for AI editing and provide a prompt.
  3. Choose from generated options.
  4. Refine and repeat as needed.

Key Features:

AI image generation features:

  • Integrates with Stable Diffusion, running locally or remotely via API, using the Forge WebUI, the Automatic1111 WebUI, or ComfyUI.
  • Supports text-to-image, image-to-image, and inpainting, allowing users to generate new images, refine existing ones, or apply specific edits using natural language prompts.
  • Advanced AI guidance through ControlNet modules, enabling features like depth mapping, recoloring, pose replication, and more.
  • AI upscaling via Stable Diffusion + ControlNet or other specialized models.
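Under the hood, clients like IntraPaint talk to these backends over a small REST API. The sketch below shows roughly what a text-to-image request looks like against the Automatic1111/Forge `/sdapi/v1/txt2img` endpoint; the address, endpoint, and parameter names come from the WebUI API defaults, not from IntraPaint's own code, and this is illustrative only:

```python
import json
from urllib.request import Request, urlopen

# Default local WebUI address; Forge and Automatic1111 both serve the
# REST API here when launched with the --api flag.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> bytes:
    """Encode a minimal text-to-image request body as JSON."""
    body = {"prompt": prompt, "steps": steps, "width": width, "height": height}
    return json.dumps(body).encode("utf-8")

def generate(prompt: str) -> list:
    """POST a prompt to the WebUI; returns base64-encoded PNG images."""
    request = Request(API_URL, data=build_txt2img_payload(prompt),
                      headers={"Content-Type": "application/json"})
    with urlopen(request, timeout=300) as response:
        return json.loads(response.read())["images"]
```

IntraPaint issues requests like this for you; the sketch only shows what "API access enabled" implies in practice.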

Traditional Image Editing Tools:

  • Full-featured layer stack with advanced blending modes, layer groups, and transformations.
  • Digital painting capabilities powered by the libmypaint brush engine, with full support for pressure-sensitive drawing tablets.
  • Standard tools such as selection, text editing, paint bucket, filters, and more are available even when AI features are disabled.

Why Combine AI and Traditional Tools?

  • Automate repetitive tasks: Focus on drawing what matters to you, and let the AI handle the rest.
  • Explore creative alternatives: Quickly iterate on multiple ideas without manually creating each variation.
  • Complete control in one package: Use familiar editing tools without needing to export files to another application.
Quickly refine sketches, clean up and color line art, or experiment with different styles:

  • A rough sketch of a person walking down a road under a sunset, shown beside a more detailed AI-refined version. Prompt: "A small figure walking down a road under a massive cloudy sky, billowing clouds, sunset, fiery fields, god rays, sun"
  • A black-and-white stylized drawing of a dense city, animated to transition to a colored version of the same image. Prompt: "organic sprawling colorful city, ultra dense, gigantic twisted textured detailed realistic climbing vines, pareidolia, stained glass, blue sky, billowing clouds, runes and symbols"
  • An animation showing a simple drawing alternating between many different styles. Prompt: "Stanford bunny, [text in image frame above]"

Control image generation visually by providing rough sketches and brief descriptions, letting Stable Diffusion handle the rest.

Prompt: "on the left, a red lizard with a yellow hat standing on a green pillar in an orange desert under a blue sky, looking at a silver robot in a cowboy hat on the right"

  • Fully AI-generated: polished details, but the lizard and robot are oddly arranged, with traits blended between the two subjects.
  • Manually drawn: clear, precise subject placement that matches the prompt, but without the fine detail and polish of the AI version.
  • Hybrid: manual composition refined with AI enhancement, combining accurate subject placement with detailed, coherent subjects for a balanced result.

Generate images with greater detail and precision by using guided inpainting to enhance specific areas.

Avoid size restrictions by generating images in segments, refine small details with guided inpainting, and get dramatically higher-quality final results:

  • A faded, incomplete image of a city landscape with incongruous library elements, shown in the IntraPaint UI.
  • Close-up detail from the same image, animated: distorted, poor-quality buildings become polished and smoothly rendered.
  • The completed city image, displayed in the IntraPaint UI.

More examples:

All images below were created using IntraPaint:

IntraPaint example: ASCII Lair IntraPaint example: IsoLibrary IntraPaint example: interstate
IntraPaint example: Jungle Grid IntraPaint example: lifebulb
IntraPaint example: catseye IntraPaint example: Moonlighter IntraPaint example: Glitch Alley
IntraPaint example: Fungiwood IntraPaint example: radian
---

Installation and Setup:

Pre-packaged builds:

Pre-compiled versions for x86_64 Linux and Windows are available on the releases page.

No installation is required to use non-AI features. Just run the executable directly.

Install from GitHub:

On other systems, you will need Git and Python to install IntraPaint. Make sure git, python (3.11 or later), and pip are installed and added to your system's PATH variable. On some systems, you might need to change "python" to "python3" and "pip" to "pip3" in the commands below. Using a Python virtual environment is also recommended, but not required. Run the following commands to install and start IntraPaint:

git clone https://github.com/centuryglass/IntraPaint
cd IntraPaint
pip install -r requirements.txt
python IntraPaint.py
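If you're not sure whether your interpreter is new enough, a quick check (a throwaway sketch, not part of IntraPaint's codebase):

```python
import sys

def python_ok(min_version=(3, 11)) -> bool:
    """Return True if the running interpreter meets the stated minimum."""
    return sys.version_info >= min_version

if not python_ok():
    print("Python 3.11+ required; found", sys.version.split()[0])
```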

AI image generation setup (Stable Diffusion):

To use AI features, you'll need a running Stable Diffusion client (ComfyUI, Forge WebUI, or the Automatic1111 WebUI) with API access enabled. The simplest way to set one up is through Stability Matrix.

  1. Download and install Stability Matrix for your system.
  2. Open Stability Matrix, click "Add Package", select "Stable Diffusion WebUI reForge", "Stable Diffusion WebUI Forge", "Stable Diffusion WebUI", or "ComfyUI", and wait for it to install.
  3. If you chose any of the WebUI options, click the gear icon next to the package you just installed to open its launch options. Scroll to the bottom of the launch options, add --api to "Extra Launch Arguments", and click "Save".
  4. Click "Launch", and wait for the Stable Diffusion client to finish starting.
  5. Open IntraPaint, and it should automatically connect to the Stable Diffusion client. If IntraPaint is already running, open "Select image generator" under the Image menu, select the option that matches the Stable Diffusion client you chose, and click "Activate".
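If IntraPaint fails to connect, you can probe the API directly to verify the --api flag took effect. A minimal sketch (not part of IntraPaint); the port and the /sdapi/v1/options endpoint match the Automatic1111/Forge defaults:

```python
from urllib.error import URLError
from urllib.request import urlopen

def webui_api_available(base_url: str = "http://127.0.0.1:7860") -> bool:
    """Return True if a WebUI REST API answers at base_url."""
    try:
        with urlopen(base_url + "/sdapi/v1/options", timeout=5) as response:
            return response.status == 200
    except (URLError, OSError, ValueError):
        return False
```

A False result with the WebUI running usually means the --api launch argument is missing.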

More detailed instructions and a comparison of the available options can be found in the Stable Diffusion setup guide.

Guides and tutorials:

IntraPaint Documentation:

Related Resources:

These are third-party resources that I've found useful; I did not create or contribute to them.

FAQ:

Q: Why isn't the brush tool visible?

On systems other than 64-bit Linux and Windows, the brush tool may not work because it requires a system-specific build of the libmypaint brush engine library. If the pre-packaged libmypaint versions can't be used, IntraPaint will try to load libmypaint libraries from an alternate directory, configurable in IntraPaint's settings under the "system" category. If you need help locating libmypaint files for a particular system, open a GitHub issue and I'll try to help.
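If you're hunting for a usable library build, a quick way to test whether a candidate file loads on your system is ctypes (an illustrative sketch; the path shown in the comment is hypothetical):

```python
import ctypes

def can_load(lib_path: str) -> bool:
    """Return True if the shared library at lib_path loads via ctypes."""
    try:
        ctypes.CDLL(lib_path)
        return True
    except OSError:
        return False

# Example (hypothetical path):
# can_load("/usr/lib/x86_64-linux-gnu/libmypaint.so")
```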

Q: Where are the ControlNet options?

The ControlNet panel only appears if the WebUI backend has a working ControlNet extension installed with API support, and ControlNet models have been downloaded.

If using WebUI Forge: The most recent version of the Forge WebUI (as of 8/30/24) does not have a working ControlNet API. Use Git to revert to commit a9e0c38, or install the v0.0.17 release.
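For example, assuming a manual Forge install (the folder name is whatever you cloned into; under Stability Matrix the package lives inside its data folder instead, and the v0.0.17 tag name is an assumption based on the release name):

```shell
# Run inside your stable-diffusion-webui-forge checkout:
git fetch --all --tags
git checkout a9e0c38      # last commit with a working ControlNet API
# or pin the tagged release instead:
# git checkout v0.0.17
```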

If using the Automatic1111 WebUI: The ControlNet panel only appears after you've installed the sd-webui-controlnet extension. Installation instructions and links to required ControlNet model files can be found here.

If using ComfyUI: You shouldn't need to do anything besides downloading ControlNet model files.

Getting ControlNet models: The Stable Diffusion WebUI ControlNet extension GitHub wiki provides a helpful set of download links. Where the models go depends on your install:

  • Stability Matrix: the Models/ControlNet folder within the Stability Matrix data folder.
  • Direct install of Stable Diffusion WebUI or Stable Diffusion WebUI Forge: the models/ControlNet folder within the WebUI folder.
  • Direct install of ComfyUI: ComfyUI's models/controlnet folder.

Q: Wasn't there something else here before?

Version 0.1.0 of IntraPaint, released July 2022, was a much simpler program that used the old GLID-3-XL AI image generation model for inpainting. You can still access that version at https://github.com/centuryglass/IntraPaint/tree/0.1-release

Alternatives:
