# LiteImage
Local AI image generation with A/B model comparison, HuggingFace Hub browser, chainable pipelines, face swap, and video generation.
## Overview
LiteImage is a local AI image generation app that lets you create images entirely on your own hardware. It supports multiple model architectures including Flux, SDXL, and Stable Diffusion, and extends into video generation. The Electron frontend communicates with a Python backend (sd-cli.exe) to handle model loading and inference.
## Features
- A/B model comparison with 3 view modes — split, overlay, and toggle — to evaluate outputs from different models or settings side by side
- HuggingFace Hub browser for one-click model downloads directly from within the app
- Chainable generation pipeline — compose multi-step workflows (e.g., txt2img → upscale → face fix) and batch matrix to sweep across parameter combinations
- Face swap with automatic facial landmark detection
- Text-to-image generation with multiple model architectures (Flux, SDXL, Stable Diffusion)
- Image-to-image transformation
- Video generation from text prompts or source images
- LoRA and ControlNet support
- Queue management for batch generation
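The chainable pipeline and batch matrix can be pictured as plain function composition plus a Cartesian sweep over settings. The sketch below is illustrative only — the step and parameter names are hypothetical, not LiteImage's API — assuming each step takes an image-like value and returns one:

```python
from itertools import product

def run_pipeline(image, steps):
    """Apply each generation step in order, e.g. txt2img -> upscale -> face fix."""
    for step in steps:
        image = step(image)
    return image

def batch_matrix(**param_lists):
    """Yield every combination of parameter values (the batch matrix sweep)."""
    keys = list(param_lists)
    for values in product(*param_lists.values()):
        yield dict(zip(keys, values))
```

For example, `batch_matrix(steps=[20, 30], cfg=[5.0, 7.5])` yields four settings dicts, one per combination.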
## Installation
LiteImage is installed via the LiteAISuite installer. Select LiteImage during the install step. The installer will set up the Python backend (sd-cli.exe) and required dependencies automatically.
Python 3.10 or higher must be available on your system before running the installer.
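Since the installer requires Python 3.10 or higher, you can confirm your interpreter meets the requirement first. This is a generic check, not part of LiteImage itself:

```python
import sys

def python_ok(version_info=None, minimum=(3, 10)):
    """Return True when the interpreter satisfies the minimum (major, minor) version."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    if not python_ok():
        raise SystemExit(f"Python 3.10+ required, found {sys.version.split()[0]}")
    print("Python version OK")
```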
## Usage
1. Launch LiteImage from the LiteAISuite launcher or your desktop shortcut.
2. Select a model from the model dropdown. On first use, models are downloaded to your local models directory.
3. Enter a prompt in the text field and adjust generation settings (steps, CFG scale, resolution, seed).
4. Click Generate to queue the job. Progress is shown in the queue panel.
5. For image-to-image, switch to the Img2Img tab, upload a source image, and set the denoising strength.
6. For video generation, switch to the Video tab and configure frame count and frame rate.
Generated images are saved to the output directory configured in Settings.
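Clicking Generate appends a job to a FIFO queue that the backend drains one at a time. A minimal sketch of that behavior — the job fields are hypothetical, chosen to mirror the settings listed above, and this is not LiteImage's actual queue code:

```python
from collections import deque

queue = deque()

def enqueue(prompt, steps=30, cfg_scale=7.5, width=1024, height=1024, seed=None):
    """Add a generation job with UI-style settings (field names are illustrative)."""
    queue.append({"prompt": prompt, "steps": steps, "cfg_scale": cfg_scale,
                  "width": width, "height": height, "seed": seed})

def process_next(generate):
    """Run the oldest queued job through a generate(job) callable, FIFO order."""
    if queue:
        return generate(queue.popleft())
    return None
```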
## Configuration
Settings are accessible from the gear icon in the top toolbar.
| Setting | Description |
|---|---|
| Models Directory | Path where model files are stored |
| Output Directory | Path where generated images are saved |
| Default Model | Model loaded on startup |
| VRAM Limit | Caps VRAM usage on systems with limited GPU memory |
| Backend Path | Path to sd-cli.exe if not auto-detected |
LoRA files should be placed in the `loras/` subfolder of your models directory. ControlNet models go in `controlnet/`.
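The expected layout can be prepared ahead of time. A small convenience helper — not part of LiteImage, just a sketch using the folder names above:

```python
from pathlib import Path

def ensure_model_dirs(models_dir):
    """Create the loras/ and controlnet/ subfolders inside the models directory."""
    root = Path(models_dir)
    for sub in ("loras", "controlnet"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```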
## Requirements
- Python 3.10 or higher
- NVIDIA GPU with 8+ GB VRAM recommended
- 12+ GB VRAM required for Flux models
- Sufficient disk space for models (SDXL ~6 GB, Flux ~12–24 GB per model)
LiteImage can run on CPU but generation will be significantly slower. SDXL and Flux models are not practical without a GPU.
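Using the rough sizes above, you can estimate whether a drive has room before downloading. The figures here are the approximations from this section, not exact download sizes:

```python
import shutil

# Rough on-disk sizes in GiB, from the requirements above (Flux varies by variant).
MODEL_SIZES_GIB = {"sdxl": 6, "flux": 24}

def required_gib(models):
    """Total approximate disk space needed for the named models."""
    return sum(MODEL_SIZES_GIB[m] for m in models)

def has_space(path, models):
    """True when the drive holding `path` has enough free space for the models."""
    return shutil.disk_usage(path).free >= required_gib(models) * 1024**3
```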
## Troubleshooting
### Backend fails to start / "sd-cli.exe not found"

Verify that the Backend Path setting points to the correct sd-cli.exe location. Re-running the installer can restore missing backend files.
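If auto-detection fails, a quick way to locate the executable is to probe a few candidate paths. The locations in the comment are examples only, not LiteImage's actual install paths:

```python
from pathlib import Path

def find_backend(candidates):
    """Return the first candidate path that exists as a file, or None."""
    for c in candidates:
        p = Path(c)
        if p.is_file():
            return p
    return None

# Example with hypothetical locations:
# backend = find_backend([r"C:\LiteAISuite\LiteImage\sd-cli.exe",
#                         r".\backend\sd-cli.exe"])
```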
### Out of memory / CUDA OOM error

Reduce the output resolution, lower the batch size to 1, or enable the VRAM Limit setting. Switching to a smaller model (e.g., SD 1.5 instead of SDXL) also reduces VRAM usage.
### Model downloads stall or fail

Check your internet connection. Models are large files; if a download is interrupted, delete the partial file from the models directory and restart LiteImage to retry.
