Compare commits

...

15 Commits
0.5.0 ... main

Author SHA1 Message Date
3e379e32cd Bump version: 0.5.5 → 0.5.6 2025-11-02 10:24:26 +00:00
c3eaf7f639 Fix flux workflow 2025-11-02 10:24:17 +00:00
c27b2c8aba Bump version: 0.5.4 → 0.5.5 2025-10-31 21:54:24 +00:00
86efefb490 refactor(libs): update imports to use extracted build_user_content 2025-10-31 21:54:21 +00:00
b7a2516dd4 Bump version: 0.5.3 → 0.5.4 2025-10-31 16:16:51 +00:00
9e3731bfdc feat(config): add automatic config file creation from sample
- Modify load_config function to copy user_config.cfg from user_config.cfg.sample if it doesn't exist
- Update README.md to reflect new features and configuration options, including web-based settings and job queue management
2025-10-31 16:15:42 +00:00
854732b1c2 Merge branch 'secondtopic' 2025-10-31 16:08:41 +00:00
6172fb4f73 refactor(libs): extract build_user_content function to avoid duplication
Move the logic for building user content, including topic selection and recent prompts avoidance, into a shared function in generic.py. Update openrouter.py and openwebui.py to use this new function instead of duplicating code.
2025-10-31 16:08:12 +00:00
6a6802472c Merge pull request #1 from karl0ss/secondtopic
Secondtopic
2025-10-31 15:32:03 +00:00
aa7092a7ed Bump version: 0.5.2 → 0.5.3 2025-10-31 15:31:20 +00:00
435b687585 20 words 2025-10-31 15:31:13 +00:00
0707b031f9 Bump version: 0.5.1 → 0.5.2 2025-10-31 15:20:24 +00:00
79106c3104 Bump version: 0.5.0 → 0.5.1 2025-10-31 15:20:17 +00:00
d8b8f14ba4 up wored to 20 2025-10-31 15:20:06 +00:00
e8ec30fc73 Secondary topic and config updates 2025-10-31 15:14:19 +00:00
8 changed files with 116 additions and 78 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.5.0"
current_version = "0.5.6"
parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
serialize = ["{major}.{minor}.{patch}"]
replace = "{new_version}"
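The `parse` pattern in this config can be checked in isolation. A quick sketch (plain `re`, with the config-file backslash escaping removed, applied to the bumped `current_version`):

```python
import re

# The parse pattern from [tool.bumpversion] above, unescaped
pattern = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")

m = pattern.match("0.5.6")
parts = {k: int(v) for k, v in m.groupdict().items()}
print(parts)  # {'major': 0, 'minor': 5, 'patch': 6}

# serialize = ["{major}.{minor}.{patch}"] reassembles the same string
assert "{major}.{minor}.{patch}".format(**parts) == "0.5.6"
```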

View File

@@ -4,7 +4,7 @@ FROM python:3.11-slim
# Set the working directory in the container
WORKDIR /app
# Set version label
ARG VERSION="0.5.0"
ARG VERSION="0.5.6"
LABEL version=$VERSION
# Copy project files into the container

View File

@@ -4,12 +4,19 @@ This project is a Flask-based web server designed to generate and display images
## Features
* **Web Interface:** A simple web interface to view generated images.
* **Image Generation:** Integrates with ComfyUI to generate images based on given prompts and models.
* **Web Interface:** A simple web interface to view generated images, manage favourites, and monitor job queues.
* **Image Generation:** Integrates with ComfyUI to generate images using SDXL, FLUX, and Qwen models based on given prompts.
* **Prompt Generation:** Automatic prompt generation using OpenWebUI or OpenRouter APIs with topic-based theming.
* **Scheduled Generation:** Automatically generates new images at a configurable time.
* **Favourites System:** Mark and manage favourite images.
* **Job Queue Management:** View and cancel running/pending image generation jobs.
* **Thumbnail Generation:** Automatic thumbnail creation for generated images.
* **Prompt Logging:** Maintains a log of recent prompts to avoid repetition.
* **Settings Management:** Web-based configuration editor for all settings.
* **Docker Support:** Comes with a `Dockerfile` and `docker-compose.yml` for easy setup and deployment.
* **Configurable:** Most options can be configured through a `user_config.cfg` file.
* **Configurable:** Most options can be configured through a `user_config.cfg` file or web interface.
* **Authentication:** Optional password protection for image creation.
* **Version Management:** Uses bump-my-version for version tracking.
## Prerequisites
@@ -33,8 +40,8 @@ This project is a Flask-based web server designed to generate and display images
```
3. **Configure the application:**
* Copy the `user_config.cfg.sample` to `user_config.cfg`.
* Edit `user_config.cfg` with your settings. See the [Configuration](#configuration) section for more details.
* The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
* Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of the home page. See the [Configuration](#configuration) section for more details.
4. **Run the application:**
```bash
@@ -51,8 +58,8 @@ This project is a Flask-based web server designed to generate and display images
```
2. **Configure the application:**
* Copy the `user_config.cfg.sample` to `user_config.cfg`.
* Edit `user_config.cfg` with your settings. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
* The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
* Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of any page. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
3. **Build and run with Docker Compose:**
```bash
@@ -79,17 +86,27 @@ The application is configured via the `user_config.cfg` file.
| `[comfyui]` | `width` | The width of the generated image. | `1568` |
| `[comfyui]` | `height` | The height of the generated image. | `672` |
| `[comfyui]` | `topics` | A comma-separated list of topics to generate prompts from. | |
| `[comfyui]` | `FLUX` | Enable FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `ONLY_FLUX` | Only use FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `secondary_topic` | A secondary topic for prompt generation. | |
| `[comfyui]` | `flux` | Enable FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `qwen` | Enable Qwen models (`True`/`False`). | `False` |
| `[comfyui:flux]` | `models` | A comma-separated list of FLUX models. | `flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf` |
| `[comfyui:qwen]` | `models` | A comma-separated list of Qwen models. | `qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf` |
| `[openwebui]` | `base_url` | The base URL for OpenWebUI. | `https://openwebui` |
| `[openwebui]` | `api_key` | The API key for OpenWebUI. | `sk-` |
| `[openwebui]` | `models` | A comma-separated list of models for OpenWebUI. | `llama3:latest,cogito:14b,gemma3:12b` |
| `[openrouter]` | `enabled` | Enable OpenRouter integration (`True`/`False`). | `False` |
| `[openrouter]` | `api_key` | The API key for OpenRouter. | |
| `[openrouter]` | `models` | A comma-separated list of models for OpenRouter. | `mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free` |
| `[openrouter]` | `list_all_free_models` | List all free models (`True`/`False`). | `False` |
## Usage
* **Gallery:** Open your browser to `http://<server_ip>:<port>` to see the gallery of generated images.
* **Create Image:** Navigate to `/create` to manually trigger image generation.
* **Create Image:** Navigate to `/create` or `/create_image` to manually trigger image generation with various model options.
* **Job Queue:** Monitor and cancel running/pending jobs via the gallery interface.
* **API Endpoints:**
* `/api/queue` - Get current job queue details (JSON)
* `/cancel` - Cancel the current running job
## Dependencies
@@ -97,6 +114,12 @@ The application is configured via the `user_config.cfg` file.
* comfy_api_simplified
* APScheduler
* Pillow
* tenacity
* nest_asyncio
* openai
* websockets
* bump-my-version
* openwebui-chat-client
* And others, see `requirements.txt`.
## Contributing

View File

@@ -5,6 +5,7 @@ import sys
import time
import os
import random
import shutil
from PIL import Image
import nest_asyncio
import json
@@ -38,10 +39,21 @@ def save_prompt(prompt):
def load_config() -> configparser.ConfigParser:
"""Loads user configuration from ./user_config.cfg."""
"""Loads user configuration from ./user_config.cfg. If it doesn't exist, copies from user_config.cfg.sample."""
user_config = configparser.ConfigParser()
config_path = "./user_config.cfg"
sample_path = "./user_config.cfg.sample"
if not os.path.exists(config_path):
if os.path.exists(sample_path):
shutil.copy(sample_path, config_path)
logging.info("Configuration file copied from sample.")
else:
logging.error("Neither user_config.cfg nor user_config.cfg.sample found.")
sys.exit(1)
try:
user_config.read("./user_config.cfg")
user_config.read(config_path)
logging.debug("Configuration loaded successfully.")
return user_config
except KeyError as e:
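The copy-from-sample behaviour added in this hunk can be exercised standalone. A minimal sketch, assuming only the logic shown above (`ensure_config` is a hypothetical name; the real function is `load_config`), run against a throwaway temp directory:

```python
import configparser
import os
import shutil
import tempfile

def ensure_config(config_path: str, sample_path: str) -> configparser.ConfigParser:
    """Mirror the new load_config logic: copy the sample if the config is missing."""
    if not os.path.exists(config_path):
        if not os.path.exists(sample_path):
            raise FileNotFoundError("neither user_config.cfg nor user_config.cfg.sample found")
        shutil.copy(sample_path, config_path)
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    return cfg

with tempfile.TemporaryDirectory() as d:
    sample = os.path.join(d, "user_config.cfg.sample")
    real = os.path.join(d, "user_config.cfg")
    with open(sample, "w") as f:
        f.write("[comfyui]\nwidth = 1568\n")
    cfg = ensure_config(real, sample)
    assert os.path.exists(real)              # config was created from the sample
    assert cfg["comfyui"]["width"] == "1568"
```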
@@ -187,6 +199,45 @@ def load_prompt_models_from_config():
return prompt_models
def build_user_content(topic: str = "random") -> str:
"""Build the user content string for prompt generation, including topic instructions and recent prompts avoidance."""
config = load_config()
topic_instruction = ""
selected_topic = ""
secondary_topic_instruction = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics) if topics else ""
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
# Add secondary topic if configured and not empty
secondary_topic = config["comfyui"].get("secondary_topic", "").strip()
if secondary_topic:
secondary_topic_instruction = f" Additionally incorporate the theme of '{secondary_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 20 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ secondary_topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
)
return user_content
def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
"""Create a prompt using a randomly selected model from OpenWebUI or OpenRouter.

View File

@@ -2,7 +2,7 @@ import random
import logging
from openai import OpenAI, RateLimitError
import nest_asyncio
from libs.generic import load_recent_prompts, load_config
from libs.generic import load_recent_prompts, load_config, build_user_content
from libs.openwebui import create_prompt_on_openwebui
import re
nest_asyncio.apply()
@@ -33,34 +33,14 @@ def get_free_models():
def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str = None) -> str:
"""Sends prompt to OpenRouter and returns the generated response."""
# Reload config to get latest values
config = load_config()
# Check if OpenRouter is enabled
if user_config["openrouter"].get("enabled", "False").lower() != "true":
if config["openrouter"].get("enabled", "False").lower() != "true":
logging.warning("OpenRouter is not enabled in the configuration.")
return ""
topic_instruction = ""
selected_topic = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics) if topics else ""
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
)
user_content = build_user_content(topic)
# Load configured models
configured_models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]

View File

@@ -1,7 +1,7 @@
import random
import logging
import nest_asyncio
from libs.generic import load_recent_prompts, load_config
from libs.generic import load_recent_prompts, load_config, build_user_content
import re
from openwebui_chat_client import OpenWebUIClient
from datetime import datetime
@@ -17,29 +17,9 @@ output_folder = user_config["comfyui"]["output_dir"]
def create_prompt_on_openwebui(prompt: str, topic: str = "random", model: str = None) -> str:
"""Sends prompt to OpenWebui and returns the generated response."""
topic_instruction = ""
selected_topic = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics)
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
)
# Reload config to get latest values
config = load_config()
user_content = build_user_content(topic)
if model:
# Use the specified model

View File

@@ -14,10 +14,14 @@ prompt = "Generate a random detailed prompt for stable diffusion."
width = 1568
height = 672
topics =
secondary_topic =
FLUX = False
ONLY_FLUX = False
flux = False
qwen = False
only_flux = False
[comfyui:qwen]
models = qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf
[comfyui:flux]
models = flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf
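The new `[comfyui:qwen]` / `[comfyui:flux]` sections are ordinary `configparser` sections whose names happen to contain a colon, and the model lists are plain comma-separated strings. Reading them might look like this (sample values inlined from the diff above):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[comfyui]
flux = False
qwen = False

[comfyui:qwen]
models = qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf

[comfyui:flux]
models = flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf
""")

# getboolean handles the True/False strings used throughout the config
assert cfg["comfyui"].getboolean("qwen") is False

# Model lists are comma-separated; stray whitespace is stripped on parse
qwen_models = [m.strip() for m in cfg["comfyui:qwen"]["models"].split(",") if m.strip()]
assert qwen_models == ["qwen-image-Q4_K_S.gguf", "qwen-image-Q2_K.gguf"]
```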

View File

@@ -167,7 +167,7 @@
"38": {
"inputs": {
"unet_name": "flux1-dev-Q4_0.gguf",
"device": "cuda:1",
"device": "cuda:0",
"virtual_vram_gb": 0,
"use_other_vram": true,
"expert_mode_allocations": ""