Compare commits

17 Commits

Author SHA1 Message Date
c27b2c8aba Bump version: 0.5.4 → 0.5.5 2025-10-31 21:54:24 +00:00
86efefb490 refactor(libs): update imports to use extracted build_user_content 2025-10-31 21:54:21 +00:00
b7a2516dd4 Bump version: 0.5.3 → 0.5.4 2025-10-31 16:16:51 +00:00
9e3731bfdc feat(config): add automatic config file creation from sample
- Modify load_config function to copy user_config.cfg from user_config.cfg.sample if it doesn't exist
- Update README.md to reflect new features and configuration options, including web-based settings and job queue management
2025-10-31 16:15:42 +00:00
854732b1c2 Merge branch 'secondtopic' 2025-10-31 16:08:41 +00:00
6172fb4f73 refactor(libs): extract build_user_content function to avoid duplication
Move the logic for building user content, including topic selection and recent prompts avoidance, into a shared function in generic.py. Update openrouter.py and openwebui.py to use this new function instead of duplicating code.
2025-10-31 16:08:12 +00:00
6a6802472c Merge pull request #1 from karl0ss/secondtopic
Secondtopic
2025-10-31 15:32:03 +00:00
aa7092a7ed Bump version: 0.5.2 → 0.5.3 2025-10-31 15:31:20 +00:00
435b687585 20 words 2025-10-31 15:31:13 +00:00
0707b031f9 Bump version: 0.5.1 → 0.5.2 2025-10-31 15:20:24 +00:00
79106c3104 Bump version: 0.5.0 → 0.5.1 2025-10-31 15:20:17 +00:00
d8b8f14ba4 up word count to 20 2025-10-31 15:20:06 +00:00
e8ec30fc73 Secondary topic and config updates 2025-10-31 15:14:19 +00:00
7f599ac65a Bump version: 0.4.0 → 0.5.0 2025-10-30 17:16:20 +00:00
0613678d50 Bump version: 0.3.14 → 0.4.0 2025-10-30 17:16:05 +00:00
bb4adbff2c feat(openrouter): add support for listing and using free OpenRouter models
Add a new configuration option `list_all_free_models` to enable fetching and displaying free models from OpenRouter. Enhance model loading functions to include free models when enabled, and implement fallback logic in prompt generation to try alternative models if the primary one fails. Update the UI to display free models in a separate optgroup.
2025-10-30 17:15:59 +00:00
f15c83ebaa feat: add Qwen workflow support and enhance model validation
- Add extraction logic for Qwen workflow metadata in PNG files
- Improve OpenRouter model selection with validation and fallback to free/configured models
- Remove outdated Flux workflow file
- Update Qwen workflow configuration with new parameters and simplified structure
2025-10-30 16:38:52 +00:00
11 changed files with 386 additions and 651 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.3.14"
current_version = "0.5.5"
parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
serialize = ["{major}.{minor}.{patch}"]
replace = "{new_version}"
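As an aside, the `parse`/`serialize` pair above is how bump-my-version splits and rebuilds the version string; a quick stdlib illustration (the regex is copied from the hunk, the bump step is just for show):

```python
# Split "0.5.5" with bump-my-version's parse pattern, bump patch, reserialize.
import re

parse = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")
parts = parse.match("0.5.5").groupdict()
parts["patch"] = str(int(parts["patch"]) + 1)      # bump patch
print("{major}.{minor}.{patch}".format(**parts))   # -> 0.5.6
```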

View File

@@ -4,7 +4,7 @@ FROM python:3.11-slim
# Set the working directory in the container
WORKDIR /app
# Set version label
ARG VERSION="0.3.14"
ARG VERSION="0.5.5"
LABEL version=$VERSION
# Copy project files into the container

View File

@@ -4,12 +4,19 @@ This project is a Flask-based web server designed to generate and display images
## Features
* **Web Interface:** A simple web interface to view generated images.
* **Image Generation:** Integrates with ComfyUI to generate images based on given prompts and models.
* **Web Interface:** A simple web interface to view generated images, manage favourites, and monitor job queues.
* **Image Generation:** Integrates with ComfyUI to generate images using SDXL, FLUX, and Qwen models based on given prompts.
* **Prompt Generation:** Automatic prompt generation using OpenWebUI or OpenRouter APIs with topic-based theming.
* **Scheduled Generation:** Automatically generates new images at a configurable time.
* **Favourites System:** Mark and manage favourite images.
* **Job Queue Management:** View and cancel running/pending image generation jobs.
* **Thumbnail Generation:** Automatic thumbnail creation for generated images.
* **Prompt Logging:** Maintains a log of recent prompts to avoid repetition.
* **Settings Management:** Web-based configuration editor for all settings.
* **Docker Support:** Comes with a `Dockerfile` and `docker-compose.yml` for easy setup and deployment.
* **Configurable:** Most options can be configured through a `user_config.cfg` file.
* **Configurable:** Most options can be configured through a `user_config.cfg` file or web interface.
* **Authentication:** Optional password protection for image creation.
* **Version Management:** Uses bump-my-version for version tracking.
## Prerequisites
@@ -33,8 +40,8 @@ This project is a Flask-based web server designed to generate and display images
```
3. **Configure the application:**
* Copy the `user_config.cfg.sample` to `user_config.cfg`.
* Edit `user_config.cfg` with your settings. See the [Configuration](#configuration) section for more details.
* The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
* Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of the home page. See the [Configuration](#configuration) section for more details.
4. **Run the application:**
```bash
@@ -51,8 +58,8 @@ This project is a Flask-based web server designed to generate and display images
```
2. **Configure the application:**
* Copy the `user_config.cfg.sample` to `user_config.cfg`.
* Edit `user_config.cfg` with your settings. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
* The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
* Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of any page. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
3. **Build and run with Docker Compose:**
```bash
@@ -79,24 +86,40 @@ The application is configured via the `user_config.cfg` file.
| `[comfyui]` | `width` | The width of the generated image. | `1568` |
| `[comfyui]` | `height` | The height of the generated image. | `672` |
| `[comfyui]` | `topics` | A comma-separated list of topics to generate prompts from. | |
| `[comfyui]` | `FLUX` | Enable FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `ONLY_FLUX` | Only use FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `secondary_topic` | A secondary topic for prompt generation. | |
| `[comfyui]` | `flux` | Enable FLUX models (`True`/`False`). | `False` |
| `[comfyui]` | `qwen` | Enable Qwen models (`True`/`False`). | `False` |
| `[comfyui:flux]` | `models` | A comma-separated list of FLUX models. | `flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf` |
| `[comfyui:qwen]` | `models` | A comma-separated list of Qwen models. | `qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf` |
| `[openwebui]` | `base_url` | The base URL for OpenWebUI. | `https://openwebui` |
| `[openwebui]` | `api_key` | The API key for OpenWebUI. | `sk-` |
| `[openwebui]` | `models` | A comma-separated list of models for OpenWebUI. | `llama3:latest,cogito:14b,gemma3:12b` |
| `[openrouter]` | `enabled` | Enable OpenRouter integration (`True`/`False`). | `False` |
| `[openrouter]` | `api_key` | The API key for OpenRouter. | |
| `[openrouter]` | `models` | A comma-separated list of models for OpenRouter. | `mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free` |
| `[openrouter]` | `list_all_free_models` | List all free models (`True`/`False`). | `False` |
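A minimal sketch of reading the new keys with the standard library `configparser` (section and key names come from the table above; a config laid out like `user_config.cfg.sample` is assumed):

```python
import configparser

config = configparser.ConfigParser()
config.read("user_config.cfg")

comfy = config["comfyui"]
flux_enabled = comfy.getboolean("flux", fallback=False)
qwen_enabled = comfy.getboolean("qwen", fallback=False)
secondary_topic = comfy.get("secondary_topic", "").strip()

# Per-family model lists live in their own sections.
qwen_models = [m.strip() for m in config["comfyui:qwen"].get("models", "").split(",") if m.strip()]

# OpenRouter free-model listing is opt-in.
list_free = config["openrouter"].getboolean("list_all_free_models", fallback=False)
print(flux_enabled, qwen_enabled, secondary_topic, qwen_models, list_free)
```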
## Usage
* **Gallery:** Open your browser to `http://<server_ip>:<port>` to see the gallery of generated images.
* **Create Image:** Navigate to `/create` to manually trigger image generation.
* **Create Image:** Navigate to `/create` or `/create_image` to manually trigger image generation with various model options.
* **Job Queue:** Monitor and cancel running/pending jobs via the gallery interface.
* **API Endpoints:**
* `/api/queue` - Get current job queue details (JSON)
* `/cancel` - Cancel the current running job
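A hypothetical client call against the queue endpoint (the host, the Flask default port 5000, and the exact JSON shape are assumptions; only the `/api/queue` path comes from the list above):

```python
import json
import urllib.request

# Fetch current job queue details as JSON from the running server.
with urllib.request.urlopen("http://localhost:5000/api/queue") as resp:
    queue = json.load(resp)
print(queue)
```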
## Dependencies
* Flask
* comfy_api_simplified
* APScheduler
* Pillow
* tenacity
* nest_asyncio
* openai
* websockets
* bump-my-version
* openwebui-chat-client
* And others, see `requirements.txt`.
## Contributing

View File

@@ -5,6 +5,7 @@ import sys
import time
import os
import random
import shutil
from PIL import Image
import nest_asyncio
import json
@@ -38,10 +39,21 @@ def save_prompt(prompt):
def load_config() -> configparser.ConfigParser:
"""Loads user configuration from ./user_config.cfg."""
"""Loads user configuration from ./user_config.cfg. If it doesn't exist, copies from user_config.cfg.sample."""
user_config = configparser.ConfigParser()
config_path = "./user_config.cfg"
sample_path = "./user_config.cfg.sample"
if not os.path.exists(config_path):
if os.path.exists(sample_path):
shutil.copy(sample_path, config_path)
logging.info("Configuration file copied from sample.")
else:
logging.error("Neither user_config.cfg nor user_config.cfg.sample found.")
sys.exit(1)
try:
user_config.read("./user_config.cfg")
user_config.read(config_path)
logging.debug("Configuration loaded successfully.")
return user_config
except KeyError as e:
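Because the hunk interleaves removed and added lines, here is the resulting function reassembled for readability (the try/except around the read is condensed):

```python
import configparser
import logging
import os
import shutil
import sys

def load_config() -> configparser.ConfigParser:
    """Loads ./user_config.cfg, copying it from user_config.cfg.sample on first run."""
    user_config = configparser.ConfigParser()
    config_path = "./user_config.cfg"
    sample_path = "./user_config.cfg.sample"
    if not os.path.exists(config_path):
        if os.path.exists(sample_path):
            # First run: materialise a real config from the shipped sample.
            shutil.copy(sample_path, config_path)
            logging.info("Configuration file copied from sample.")
        else:
            logging.error("Neither user_config.cfg nor user_config.cfg.sample found.")
            sys.exit(1)
    user_config.read(config_path)
    logging.debug("Configuration loaded successfully.")
    return user_config
```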
@@ -81,17 +93,20 @@ def get_details_from_png(path):
try:
date = datetime.fromtimestamp(os.path.getctime(path)).strftime("%d-%m-%Y")
with Image.open(path) as img:
try:
data = json.loads(img.info["prompt"])
prompt = data['6']['inputs']['text']
if '38' in data and 'unet_name' in data['38']['inputs']:
# Flux workflow
data = json.loads(img.info["prompt"])
prompt = data['6']['inputs']['text']
model = data['38']['inputs']['unet_name'].split(".")[0]
except KeyError:
elif '4' in data and 'ckpt_name' in data['4']['inputs']:
# SDXL workflow
data = json.loads(img.info["prompt"])
prompt = data['6']['inputs']['text']
model = data['4']['inputs']['ckpt_name']
return {"p":prompt,"m":model,"d":date} or {"p":"","m":"","c":""}
elif '80' in data and 'unet_name' in data['80']['inputs']:
# Qwen workflow
model = data['80']['inputs']['unet_name'].split(".")[0]
else:
model = "unknown"
return {"p":prompt,"m":model,"d":date}
except Exception as e:
print(f"Error reading metadata from {path}: {e}")
return ""
@@ -146,8 +161,13 @@ def load_openrouter_models_from_config():
config = load_config()
if config["openrouter"].get("enabled", "False").lower() == "true":
models = config["openrouter"]["models"].split(",")
return sorted([model.strip() for model in models if model.strip()], key=str.lower)
return []
configured_models = sorted([model.strip() for model in models if model.strip()], key=str.lower)
free_models = []
if config["openrouter"].get("list_all_free_models", "False").lower() == "true":
from libs.openrouter import get_free_models
free_models = get_free_models()
return configured_models, free_models
return [], []
def load_openwebui_models_from_config():
config = load_config()
@@ -160,38 +180,82 @@ def load_prompt_models_from_config():
"""Load and return a list of available prompt generation models (both OpenWebUI and OpenRouter)."""
config = load_config()
prompt_models = []
# Add OpenWebUI models if configured
if "openwebui" in config and "models" in config["openwebui"]:
openwebui_models = config["openwebui"]["models"].split(",")
prompt_models.extend([("openwebui", model.strip()) for model in openwebui_models if model.strip()])
# Add OpenRouter models if enabled and configured
if config["openrouter"].get("enabled", "False").lower() == "true" and "models" in config["openrouter"]:
openrouter_models = config["openrouter"]["models"].split(",")
prompt_models.extend([("openrouter", model.strip()) for model in openrouter_models if model.strip()])
# Add free models if flag is set
if config["openrouter"].get("list_all_free_models", "False").lower() == "true":
from libs.openrouter import get_free_models
free_models = get_free_models()
prompt_models.extend([("openrouter", model) for model in free_models])
return prompt_models
def build_user_content(topic: str = "random") -> str:
"""Build the user content string for prompt generation, including topic instructions and recent prompts avoidance."""
config = load_config()
topic_instruction = ""
selected_topic = ""
secondary_topic_instruction = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics) if topics else ""
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
# Add secondary topic if configured and not empty
secondary_topic = config["comfyui"].get("secondary_topic", "").strip()
if secondary_topic:
secondary_topic_instruction = f" Additionally incorporate the theme of '{secondary_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 20 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ secondary_topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
)
return user_content
def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
"""Create a prompt using a randomly selected model from OpenWebUI or OpenRouter.
If OpenWebUI fails, it will retry once. If it fails again, it will fallback to OpenRouter.
"""
prompt_models = load_prompt_models_from_config()
if not prompt_models:
logging.warning("No prompt generation models configured.")
return None
# Randomly select a model
service, model = random.choice(prompt_models)
# Import here to avoid circular imports
from libs.openwebui import create_prompt_on_openwebui
from libs.openrouter import create_prompt_on_openrouter
if service == "openwebui":
try:
# First attempt with OpenWebUI
@@ -199,13 +263,13 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
result = create_prompt_on_openwebui(base_prompt, topic, model)
if result:
return result
# If first attempt returns None, try again
logging.warning("First OpenWebUI attempt failed. Retrying...")
result = create_prompt_on_openwebui(base_prompt, topic, model)
if result:
return result
# If second attempt fails, fallback to OpenRouter
logging.warning("Second OpenWebUI attempt failed. Falling back to OpenRouter...")
openrouter_models = [m for m in prompt_models if m[0] == "openrouter"]
@@ -215,7 +279,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
else:
logging.error("No OpenRouter models configured for fallback.")
return "A colorful abstract composition" # Default fallback prompt
except Exception as e:
logging.error(f"Error with OpenWebUI: {e}")
# Fallback to OpenRouter on exception
@@ -231,7 +295,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
else:
logging.error("No OpenRouter models configured for fallback.")
return "A colorful abstract composition" # Default fallback prompt
elif service == "openrouter":
try:
# Use OpenRouter
@@ -239,7 +303,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
except Exception as e:
logging.error(f"Error with OpenRouter: {e}")
return "A colorful abstract composition" # Default fallback prompt
user_config = load_config()
output_folder = user_config["comfyui"]["output_dir"]
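A usage sketch of the two helpers this file now exposes (the module path `libs.generic` matches the imports elsewhere in this diff; the base prompt string is the one from `user_config.cfg.sample`):

```python
from libs.generic import build_user_content, create_prompt_with_random_model

# Topic selection and recent-prompt avoidance now live in one shared place.
content = build_user_content(topic="random")

# Random backend choice with the documented fallback chain: OpenWebUI is
# retried once, then OpenRouter, then a default prompt string is returned.
prompt = create_prompt_with_random_model(
    "Generate a random detailed prompt for stable diffusion."
)
print(content, prompt, sep="\n---\n")
```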

View File

@@ -2,7 +2,7 @@ import random
import logging
from openai import OpenAI, RateLimitError
import nest_asyncio
from libs.generic import load_recent_prompts, load_config
from libs.generic import load_recent_prompts, load_config, build_user_content
from libs.openwebui import create_prompt_on_openwebui
import re
nest_asyncio.apply()
@@ -14,49 +14,78 @@ LOG_FILE = "./prompts_log.jsonl"
user_config = load_config()
output_folder = user_config["comfyui"]["output_dir"]
def get_free_models():
"""Fetch all free models from OpenRouter."""
if user_config["openrouter"].get("enabled", "False").lower() != "true":
return []
try:
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=user_config["openrouter"]["api_key"],
)
all_models_response = client.models.list()
all_models = [m.id for m in all_models_response.data]
free_models = [m for m in all_models if "free" in m.lower()]
return sorted(free_models, key=str.lower)
except Exception as e:
logging.warning(f"Failed to fetch free models from OpenRouter: {e}")
return []
def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str = None) -> str:
"""Sends prompt to OpenRouter and returns the generated response."""
# Reload config to get latest values
config = load_config()
# Check if OpenRouter is enabled
if user_config["openrouter"].get("enabled", "False").lower() != "true":
if config["openrouter"].get("enabled", "False").lower() != "true":
logging.warning("OpenRouter is not enabled in the configuration.")
return ""
topic_instruction = ""
selected_topic = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics) if topics else ""
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
user_content = build_user_content(topic)
# Load configured models
configured_models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
if not configured_models:
logging.error("No OpenRouter models configured.")
return ""
# Create client early for model checking
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=user_config["openrouter"]["api_key"],
)
# Use the specified model or select a random model from the configured OpenRouter models
# Select model
if model:
# Use the specified model
model = model
original_model = model
# Always check if model exists on OpenRouter
try:
all_models_response = client.models.list()
all_models = [m.id for m in all_models_response.data]
if model not in all_models:
# Fallback to random free model from all OpenRouter models
free_models = [m for m in all_models if "free" in m.lower()]
if free_models:
model = random.choice(free_models)
logging.info(f"Specified model '{original_model}' not found on OpenRouter, falling back to free model: {model}")
else:
# No free models, fallback to random configured model
model = random.choice(configured_models)
logging.warning(f"Specified model '{original_model}' not found, no free models available on OpenRouter, using random configured model: {model}")
# else model exists, use it
except Exception as e:
logging.warning(f"Failed to fetch OpenRouter models for validation: {e}. Falling back to configured models.")
if model not in configured_models:
# Fallback to random free from configured
free_models = [m for m in configured_models if "free" in m.lower()]
if free_models:
model = random.choice(free_models)
logging.info(f"Specified model '{original_model}' not found, falling back to free configured model: {model}")
else:
model = random.choice(configured_models)
logging.warning(f"Specified model '{original_model}' not found, no free configured models available, using random configured model: {model}")
# else use the specified model
else:
# Select a random model from the configured OpenRouter models
models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
if not models:
logging.error("No OpenRouter models configured.")
return ""
model = random.choice(models)
model = random.choice(configured_models)
try:
client = OpenAI(
@@ -64,25 +93,50 @@ def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str =
api_key=user_config["openrouter"]["api_key"],
)
completion = client.chat.completions.create(
model=model,
messages=[
{
"role": "system",
"content": (
"You are a prompt generator for Stable Diffusion. "
"Generate a detailed and imaginative prompt with a strong visual theme. "
"Focus on lighting, atmosphere, and artistic style. "
"Keep the prompt concise, no extra commentary or formatting."
),
},
{
"role": "user",
"content": user_content,
},
]
system_content = (
"You are a prompt generator for Stable Diffusion. "
"Generate a detailed and imaginative prompt with a strong visual theme. "
"Focus on lighting, atmosphere, and artistic style. "
"Keep the prompt concise, no extra commentary or formatting."
)
# Try the specified model first
try:
completion = client.chat.completions.create(
model=model,
messages=[
{
"role": "system",
"content": system_content,
},
{
"role": "user",
"content": user_content,
},
]
)
except Exception as e:
# If system message fails (e.g., model doesn't support developer instructions),
# retry with instructions included in user message
if "developer instruction" in str(e).lower() or "system message" in str(e).lower():
logging.info(f"Model {model} doesn't support system messages, retrying with instructions in user message")
combined_content = f"{system_content}\n\n{user_content}"
completion = client.chat.completions.create(
model=model,
messages=[
{
"role": "user",
"content": combined_content,
},
]
)
else:
# If it's another error, try fallback models
logging.warning(f"Error with model {model}: {e}. Trying fallback models.")
raise e
# If we get here, the completion was successful
prompt = completion.choices[0].message.content.strip('"')
match = re.search(r'"([^"]+)"', prompt)
if not match:
@@ -106,5 +160,45 @@ def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str =
logging.error("No OpenWebUI models configured for fallback.")
return "A colorful abstract composition" # Final fallback
except Exception as e:
logging.error(f"Error generating prompt with OpenRouter: {e}")
# If the specified model fails, try fallback models
logging.warning(f"Primary model {model} failed: {e}. Trying fallback models.")
# Get all available models for fallback
configured_models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
free_models = get_free_models()
# Combine configured and free models, excluding the failed one
all_models = configured_models + free_models
fallback_models = [m for m in all_models if m != model]
if not fallback_models:
logging.error("No fallback models available.")
return ""
# Try up to 3 fallback models
for fallback_model in fallback_models[:3]:
try:
logging.info(f"Trying fallback model: {fallback_model}")
completion = client.chat.completions.create(
model=fallback_model,
messages=[
{
"role": "user",
"content": f"{system_content}\n\n{user_content}",
},
]
)
prompt = completion.choices[0].message.content.strip('"')
match = re.search(r'"([^"]+)"', prompt)
if not match:
match = re.search(r":\s*\n*\s*(.+)", prompt)
if match:
prompt = match.group(1)
logging.info(f"Successfully generated prompt with fallback model: {fallback_model}")
return prompt
except Exception as fallback_e:
logging.warning(f"Fallback model {fallback_model} also failed: {fallback_e}")
continue
logging.error("All models failed, including fallbacks.")
return ""

View File

@@ -1,7 +1,7 @@
import random
import logging
import nest_asyncio
from libs.generic import load_recent_prompts, load_config
from libs.generic import load_recent_prompts, load_config, build_user_content
import re
from openwebui_chat_client import OpenWebUIClient
from datetime import datetime
@@ -17,29 +17,9 @@ output_folder = user_config["comfyui"]["output_dir"]
def create_prompt_on_openwebui(prompt: str, topic: str = "random", model: str = None) -> str:
"""Sends prompt to OpenWebui and returns the generated response."""
topic_instruction = ""
selected_topic = ""
# Unique list of recent prompts
recent_prompts = list(set(load_recent_prompts()))
if topic == "random":
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
selected_topic = random.choice(topics)
elif topic != "":
selected_topic = topic
else:
# Decide on whether to include a topic (e.g., 30% chance to include)
topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
if random.random() < 0.3 and topics:
selected_topic = random.choice(topics)
if selected_topic != "":
topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
user_content = (
"Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+ topic_instruction
+ "Avoid prompts similar to the following:"
+ "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
)
# Reload config to get latest values
config = load_config()
user_content = build_user_content(topic)
if model:
# Use the specified model

View File

@@ -38,8 +38,8 @@ def create():
# Load all models (SDXL, FLUX, and Qwen)
sdxl_models, flux_models, qwen_models = load_models_from_config()
openwebui_models = load_openwebui_models_from_config()
openrouter_models = load_openrouter_models_from_config()
openrouter_models, openrouter_free_models = load_openrouter_models_from_config()
queue_count = get_queue_count()
return render_template("create_image.html",
sdxx_models=sdxl_models,
@@ -47,6 +47,7 @@
qwen_models=qwen_models,
openwebui_models=openwebui_models,
openrouter_models=openrouter_models,
openrouter_free_models=openrouter_free_models,
topics=load_topics_from_config(),
queue_count=queue_count)
@@ -68,8 +69,8 @@ def create_image_page():
# Load all models (SDXL, FLUX, and Qwen)
sdxl_models, flux_models, qwen_models = load_models_from_config()
openwebui_models = load_openwebui_models_from_config()
openrouter_models = load_openrouter_models_from_config()
openrouter_models, openrouter_free_models = load_openrouter_models_from_config()
queue_count = get_queue_count()
return render_template("create_image.html",
sdxl_models=sdxl_models,
@@ -77,6 +78,7 @@ def create_image_page():
qwen_models=qwen_models,
openwebui_models=openwebui_models,
openrouter_models=openrouter_models,
openrouter_free_models=openrouter_free_models,
topics=load_topics_from_config(),
queue_count=queue_count)

View File

@@ -249,6 +249,13 @@
{% endfor %}
</optgroup>
{% endif %}
{% if openrouter_free_models %}
<optgroup label="OpenRouter Free">
{% for m in openrouter_free_models %}
<option value="openrouter:{{ m }}">{{ m }}</option>
{% endfor %}
</optgroup>
{% endif %}
</select>
</div>

View File

@@ -13,11 +13,15 @@ output_dir = ./output/
prompt = "Generate a random detailed prompt for stable diffusion."
width = 1568
height = 672
topics =
topics =
secondary_topic =
FLUX = False
ONLY_FLUX = False
flux = False
qwen = False
only_flux = False
[comfyui:qwen]
models = qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf
[comfyui:flux]
models = flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf
@@ -29,4 +33,5 @@ models = llama3:latest,cogito:14b,gemma3:12b
[openrouter]
enabled = False
api_key =
models = mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free
models = mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free
list_all_free_models = False

View File

@@ -1,433 +0,0 @@
{
"8": {
"inputs": {
"samples": [
"62",
1
],
"vae": [
"27",
0
]
},
"class_type": "VAEDecode",
"_meta": {
"title": "VAE Decode"
}
},
"22": {
"inputs": {
"clip_name1": "t5/t5xxl_fp8_e4m3fn.safetensors",
"clip_name2": "clip_l.safetensors",
"type": "flux",
"device": "default"
},
"class_type": "DualCLIPLoader",
"_meta": {
"title": "DualCLIPLoader"
}
},
"27": {
"inputs": {
"vae_name": "FLUX1/ae.safetensors"
},
"class_type": "VAELoader",
"_meta": {
"title": "Load VAE"
}
},
"32": {
"inputs": {
"upscale_model": [
"33",
0
],
"image": [
"8",
0
]
},
"class_type": "ImageUpscaleWithModel",
"_meta": {
"title": "Upscale Image (using Model)"
}
},
"33": {
"inputs": {
"model_name": "4x-UltraSharp.pth"
},
"class_type": "UpscaleModelLoader",
"_meta": {
"title": "Load Upscale Model"
}
},
"34": {
"inputs": {
"upscale_method": "lanczos",
"scale_by": 0.5,
"image": [
"32",
0
]
},
"class_type": "ImageScaleBy",
"_meta": {
"title": "Half size"
}
},
"35": {
"inputs": {
"unet_name": "flux1-dev-Q4_0.gguf"
},
"class_type": "UnetLoaderGGUF",
"_meta": {
"title": "Unet Loader (GGUF)"
}
},
"40": {
"inputs": {
"int": 20
},
"class_type": "Int Literal (Image Saver)",
"_meta": {
"title": "Generation Steps"
}
},
"41": {
"inputs": {
"width": 720,
"height": 1080,
"aspect_ratio": "custom",
"swap_dimensions": "Off",
"upscale_factor": 2,
"prescale_factor": 1,
"batch_size": 1
},
"class_type": "CR Aspect Ratio",
"_meta": {
"title": "CR Aspect Ratio"
}
},
"42": {
"inputs": {
"filename": "THISFILE",
"path": "",
"extension": "png",
"steps": [
"40",
0
],
"cfg": [
"52",
0
],
"modelname": "flux1-dev-Q4_0.gguf",
"sampler_name": [
"50",
1
],
"positive": [
"44",
0
],
"negative": [
"45",
0
],
"seed_value": [
"48",
0
],
"width": [
"41",
0
],
"height": [
"41",
1
],
"lossless_webp": true,
"quality_jpeg_or_webp": 100,
"optimize_png": false,
"counter": 0,
"denoise": [
"53",
0
],
"clip_skip": 0,
"time_format": "%Y-%m-%d-%H%M%S",
"save_workflow_as_json": true,
"embed_workflow": true,
"additional_hashes": "",
"download_civitai_data": true,
"easy_remix": true,
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"images": [
"34",
0
]
},
"class_type": "Image Saver",
"_meta": {
"title": "CivitAI Image Saver"
}
},
"44": {
"inputs": {
"text": "",
"speak_and_recognation": {
"__value__": [
false,
true
]
}
},
"class_type": "ttN text",
"_meta": {
"title": "Positive Prompt T5"
}
},
"45": {
"inputs": {
"text": "text, watermark, deformed Avoid flat colors, poor lighting, and artificial elements. No unrealistic elements, low resolution, or flat colors. Avoid generic objects, poor lighting, and inconsistent styles, blurry, low-quality, distorted faces, overexposed lighting, extra limbs, bad anatomy, low contrast",
"speak_and_recognation": {
"__value__": [
false,
true
]
}
},
"class_type": "ttN text",
"_meta": {
"title": "Negative Prompt"
}
},
"47": {
"inputs": {
"text": [
"44",
0
],
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"68",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"48": {
"inputs": {
"seed": 903006749445372,
"increment": 1
},
"class_type": "Seed Generator (Image Saver)",
"_meta": {
"title": "Seed"
}
},
"49": {
"inputs": {
"scheduler": "beta"
},
"class_type": "Scheduler Selector (Comfy) (Image Saver)",
"_meta": {
"title": "Scheduler Selector"
}
},
"50": {
"inputs": {
"sampler_name": "euler"
},
"class_type": "Sampler Selector (Image Saver)",
"_meta": {
"title": "Sampler Selector (Image Saver)"
}
},
"51": {
"inputs": {
"images": [
"8",
0
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
},
"52": {
"inputs": {
"float": 3.5
},
"class_type": "Float Literal (Image Saver)",
"_meta": {
"title": "CFG"
}
},
"53": {
"inputs": {
"float": 1
},
"class_type": "Float Literal (Image Saver)",
"_meta": {
"title": "Denoise"
}
},
"60": {
"inputs": {
"clip_l": "",
"t5xxl": [
"44",
0
],
"guidance": [
"52",
0
],
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"68",
1
]
},
"class_type": "CLIPTextEncodeFlux",
"_meta": {
"title": "CLIPTextEncodeFlux"
}
},
"62": {
"inputs": {
"noise": [
"65",
0
],
"guider": [
"67",
0
],
"sampler": [
"63",
0
],
"sigmas": [
"64",
0
],
"latent_image": [
"41",
5
]
},
"class_type": "SamplerCustomAdvanced",
"_meta": {
"title": "SamplerCustomAdvanced"
}
},
"63": {
"inputs": {
"sampler_name": [
"50",
0
]
},
"class_type": "KSamplerSelect",
"_meta": {
"title": "KSamplerSelect"
}
},
"64": {
"inputs": {
"scheduler": [
"49",
0
],
"steps": [
"40",
0
],
"denoise": [
"53",
0
],
"model": [
"68",
0
]
},
"class_type": "BasicScheduler",
"_meta": {
"title": "BasicScheduler"
}
},
"65": {
"inputs": {
"noise_seed": [
"48",
0
]
},
"class_type": "RandomNoise",
"_meta": {
"title": "RandomNoise"
}
},
"67": {
"inputs": {
"model": [
"68",
0
],
"conditioning": [
"47",
0
]
},
"class_type": "BasicGuider",
"_meta": {
"title": "BasicGuider"
}
},
"68": {
"inputs": {
"lora_01": "None",
"strength_01": 1,
"lora_02": "None",
"strength_02": 1,
"lora_03": "None",
"strength_03": 1,
"lora_04": "None",
"strength_04": 1,
"model": [
"35",
0
],
"clip": [
"22",
0
]
},
"class_type": "Lora Loader Stack (rgthree)",
"_meta": {
"title": "Lora Loader Stack (rgthree)"
}
}
}

View File

@@ -1,45 +1,26 @@
{
"93": {
"3": {
"inputs": {
"text": "jpeg compression",
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"126",
0
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"95": {
"inputs": {
"seed": 22,
"steps": 10,
"cfg": 4.5,
"seed": 367723847870487,
"steps": 8,
"cfg": 2.5,
"sampler_name": "euler",
"scheduler": "normal",
"scheduler": "simple",
"denoise": 1,
"model": [
"127",
"66",
0
],
"positive": [
"100",
"6",
0
],
"negative": [
"93",
"7",
0
],
"latent_image": [
"97",
"58",
0
]
},
@@ -48,26 +29,40 @@
"title": "KSampler"
}
},
"97": {
"6": {
"inputs": {
"width": 1280,
"height": 768,
"length": 1,
"batch_size": 1
"text": "Cat sitting at desk, surrounded by taxidermied dinosaurs",
"clip": [
"38",
0
]
},
"class_type": "EmptyHunyuanLatentVideo",
"class_type": "CLIPTextEncode",
"_meta": {
"title": "EmptyHunyuanLatentVideo"
"title": "Positive"
}
},
"98": {
"7": {
"inputs": {
"text": "text, watermark, deformed Avoid flat colors, poor lighting, and artificial elements. No unrealistic elements, low resolution, or flat colors. Avoid generic objects, poor lighting, and inconsistent styles, blurry, low-quality, distorted faces, overexposed lighting, extra limbs, bad anatomy, low contrast",
"clip": [
"38",
0
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "Negative"
}
},
"8": {
"inputs": {
"samples": [
"95",
"3",
0
],
"vae": [
"128",
"39",
0
]
},
@@ -76,86 +71,84 @@
"title": "VAE Decode"
}
},
"100": {
"38": {
"inputs": {
"text": "Terminator riding a push bike",
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"126",
0
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"102": {
"inputs": {
"images": [
"129",
0
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
},
"126": {
"inputs": {
"clip_name": "Qwen2.5-VL-7B-Instruct-Q3_K_M.gguf",
"clip_name": "qwen_2.5_vl_7b_fp8_scaled.safetensors",
"type": "qwen_image",
"device": "cuda:1",
"virtual_vram_gb": 6,
"use_other_vram": true,
"expert_mode_allocations": ""
"device": "default"
},
"class_type": "CLIPLoaderGGUFDisTorchMultiGPU",
"class_type": "CLIPLoader",
"_meta": {
"title": "CLIPLoaderGGUFDisTorchMultiGPU"
"title": "Load CLIP"
}
},
"127": {
"39": {
"inputs": {
"unet_name": "qwen-image-Q2_K.gguf",
"device": "cuda:0",
"virtual_vram_gb": 6,
"use_other_vram": true,
"expert_mode_allocations": ""
"vae_name": "qwen_image_vae.safetensors"
},
"class_type": "UnetLoaderGGUFDisTorchMultiGPU",
"class_type": "VAELoader",
"_meta": {
"title": "UnetLoaderGGUFDisTorchMultiGPU"
"title": "Load VAE"
}
},
"128": {
"58": {
"inputs": {
"vae_name": "qwen_image_vae.safetensors",
"device": "cuda:1"
"width": 720,
"height": 1088,
"batch_size": 1
},
"class_type": "VAELoaderMultiGPU",
"class_type": "EmptySD3LatentImage",
"_meta": {
"title": "VAELoaderMultiGPU"
"title": "CR Aspect Ratio"
}
},
"129": {
"60": {
"inputs": {
"offload_model": true,
"offload_cache": true,
"anything": [
"98",
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
},
"class_type": "VRAMCleanup",
"class_type": "SaveImage",
"_meta": {
"title": "🎈VRAM-Cleanup"
"title": "Save Image"
}
},
"66": {
"inputs": {
"shift": 3.1000000000000005,
"model": [
"73",
0
]
},
"class_type": "ModelSamplingAuraFlow",
"_meta": {
"title": "ModelSamplingAuraFlow"
}
},
"73": {
"inputs": {
"lora_name": "Qwen-Image-Lightning-8steps-V1.0.safetensors",
"strength_model": 1,
"model": [
"80",
0
]
},
"class_type": "LoraLoaderModelOnly",
"_meta": {
"title": "LoraLoaderModelOnly"
}
},
"80": {
"inputs": {
"unet_name": "qwen-image-Q4_K_S.gguf"
},
"class_type": "UnetLoaderGGUF",
"_meta": {
"title": "Load Checkpoint"
}
}
}