Compare commits


No commits in common. "c27b2c8abab29030bf7dc809195349c62b1c693c" and "5c09bcd9e84b8a1f78558b18b2434bf4b5cb0957" have entirely different histories.

11 changed files with 654 additions and 389 deletions

File: bump-my-version configuration (filename not captured)

@@ -1,5 +1,5 @@
 [tool.bumpversion]
-current_version = "0.5.5"
+current_version = "0.3.14"
 parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
 serialize = ["{major}.{minor}.{patch}"]
 replace = "{new_version}"
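Context for the hunk above: `parse` and `serialize` define how bump-my-version round-trips a version string. A minimal sketch of that round trip using plain `re` and the exact values from this config (illustrative, not bump-my-version's own internals):

```python
import re

# parse/serialize copied from the [tool.bumpversion] block above.
PARSE = r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
SERIALIZE = "{major}.{minor}.{patch}"

def bump_patch(version: str) -> str:
    """Parse with the configured regex, bump the patch part, re-serialize."""
    parts = re.match(PARSE, version).groupdict()
    parts["patch"] = str(int(parts["patch"]) + 1)
    return SERIALIZE.format(**parts)

print(bump_patch("0.3.14"))  # -> 0.3.15
```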

File: Dockerfile

@@ -4,7 +4,7 @@ FROM python:3.11-slim
 # Set the working directory in the container
 WORKDIR /app
 # Set version label
-ARG VERSION="0.5.5"
+ARG VERSION="0.3.14"
 LABEL version=$VERSION
 # Copy project files into the container

File: README.md

@@ -4,19 +4,12 @@ This project is a Flask-based web server designed to generate and display images
 ## Features
-* **Web Interface:** A simple web interface to view generated images, manage favourites, and monitor job queues.
-* **Image Generation:** Integrates with ComfyUI to generate images using SDXL, FLUX, and Qwen models based on given prompts.
-* **Prompt Generation:** Automatic prompt generation using OpenWebUI or OpenRouter APIs with topic-based theming.
+* **Web Interface:** A simple web interface to view generated images.
+* **Image Generation:** Integrates with ComfyUI to generate images based on given prompts and models.
 * **Scheduled Generation:** Automatically generates new images at a configurable time.
-* **Favourites System:** Mark and manage favourite images.
-* **Job Queue Management:** View and cancel running/pending image generation jobs.
-* **Thumbnail Generation:** Automatic thumbnail creation for generated images.
-* **Prompt Logging:** Maintains a log of recent prompts to avoid repetition.
-* **Settings Management:** Web-based configuration editor for all settings.
 * **Docker Support:** Comes with a `Dockerfile` and `docker-compose.yml` for easy setup and deployment.
-* **Configurable:** Most options can be configured through a `user_config.cfg` file or web interface.
+* **Configurable:** Most options can be configured through a `user_config.cfg` file.
 * **Authentication:** Optional password protection for image creation.
-* **Version Management:** Uses bump-my-version for version tracking.
 ## Prerequisites
@@ -40,8 +33,8 @@ This project is a Flask-based web server designed to generate and display images
 ```
 3. **Configure the application:**
-    * The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
-    * Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of the home page. See the [Configuration](#configuration) section for more details.
+    * Copy the `user_config.cfg.sample` to `user_config.cfg`.
+    * Edit `user_config.cfg` with your settings. See the [Configuration](#configuration) section for more details.
 4. **Run the application:**
 ```bash
@@ -58,8 +51,8 @@ This project is a Flask-based web server designed to generate and display images
 ```
 2. **Configure the application:**
-    * The `user_config.cfg` file will be automatically created from `user_config.cfg.sample` on first run if it doesn't exist.
-    * Edit `user_config.cfg` with your settings, or use the web-based settings page accessible by clicking the version number in the bottom right corner of any page. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
+    * Copy the `user_config.cfg.sample` to `user_config.cfg`.
+    * Edit `user_config.cfg` with your settings. The `comfyui_url` should be the address of your ComfyUI instance, accessible from within the Docker network (e.g., `http://host.docker.internal:8188` or your server's IP).
 3. **Build and run with Docker Compose:**
 ```bash
@@ -86,40 +79,24 @@ The application is configured via the `user_config.cfg` file.
 | `[comfyui]` | `width` | The width of the generated image. | `1568` |
 | `[comfyui]` | `height` | The height of the generated image. | `672` |
 | `[comfyui]` | `topics` | A comma-separated list of topics to generate prompts from. | |
-| `[comfyui]` | `secondary_topic` | A secondary topic for prompt generation. | |
-| `[comfyui]` | `flux` | Enable FLUX models (`True`/`False`). | `False` |
-| `[comfyui]` | `qwen` | Enable Qwen models (`True`/`False`). | `False` |
+| `[comfyui]` | `FLUX` | Enable FLUX models (`True`/`False`). | `False` |
+| `[comfyui]` | `ONLY_FLUX` | Only use FLUX models (`True`/`False`). | `False` |
 | `[comfyui:flux]` | `models` | A comma-separated list of FLUX models. | `flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf` |
-| `[comfyui:qwen]` | `models` | A comma-separated list of Qwen models. | `qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf` |
 | `[openwebui]` | `base_url` | The base URL for OpenWebUI. | `https://openwebui` |
 | `[openwebui]` | `api_key` | The API key for OpenWebUI. | `sk-` |
 | `[openwebui]` | `models` | A comma-separated list of models for OpenWebUI. | `llama3:latest,cogito:14b,gemma3:12b` |
-| `[openrouter]` | `enabled` | Enable OpenRouter integration (`True`/`False`). | `False` |
-| `[openrouter]` | `api_key` | The API key for OpenRouter. | |
-| `[openrouter]` | `models` | A comma-separated list of models for OpenRouter. | `mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free` |
-| `[openrouter]` | `list_all_free_models` | List all free models (`True`/`False`). | `False` |
 ## Usage
 * **Gallery:** Open your browser to `http://<server_ip>:<port>` to see the gallery of generated images.
-* **Create Image:** Navigate to `/create` or `/create_image` to manually trigger image generation with various model options.
-* **Job Queue:** Monitor and cancel running/pending jobs via the gallery interface.
-* **API Endpoints:**
-    * `/api/queue` - Get current job queue details (JSON)
-    * `/cancel` - Cancel the current running job
+* **Create Image:** Navigate to `/create` to manually trigger image generation.
 ## Dependencies
 * Flask
 * comfy_api_simplified
 * APScheduler
 * Pillow
-* tenacity
-* nest_asyncio
-* openai
-* websockets
-* bump-my-version
-* openwebui-chat-client
 * And others, see `requirements.txt`.
 ## Contributing
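The options in the table above map directly onto `configparser` lookups. A minimal sketch of reading them, assuming the section and option names from `user_config.cfg.sample`; the parsing shown is illustrative rather than the app's exact code:

```python
import configparser

config = configparser.ConfigParser()
config.read("./user_config.cfg")

# Option names follow user_config.cfg.sample; this parsing is illustrative.
width = config["comfyui"].getint("width", fallback=1568)
height = config["comfyui"].getint("height", fallback=672)
topics = [t.strip() for t in config["comfyui"].get("topics", "").split(",") if t.strip()]
flux_enabled = config["comfyui"].get("FLUX", "False").lower() == "true"
print(width, height, topics, flux_enabled)
```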

File: libs/generic.py

@@ -5,7 +5,6 @@ import sys
 import time
 import os
 import random
-import shutil
 from PIL import Image
 import nest_asyncio
 import json
@@ -39,21 +38,10 @@ def save_prompt(prompt):
 def load_config() -> configparser.ConfigParser:
-    """Loads user configuration from ./user_config.cfg. If it doesn't exist, copies from user_config.cfg.sample."""
+    """Loads user configuration from ./user_config.cfg."""
     user_config = configparser.ConfigParser()
-    config_path = "./user_config.cfg"
-    sample_path = "./user_config.cfg.sample"
-    if not os.path.exists(config_path):
-        if os.path.exists(sample_path):
-            shutil.copy(sample_path, config_path)
-            logging.info("Configuration file copied from sample.")
-        else:
-            logging.error("Neither user_config.cfg nor user_config.cfg.sample found.")
-            sys.exit(1)
     try:
-        user_config.read(config_path)
+        user_config.read("./user_config.cfg")
         logging.debug("Configuration loaded successfully.")
         return user_config
     except KeyError as e:
@@ -93,20 +81,17 @@ def get_details_from_png(path):
     try:
         date = datetime.fromtimestamp(os.path.getctime(path)).strftime("%d-%m-%Y")
         with Image.open(path) as img:
-            data = json.loads(img.info["prompt"])
-            prompt = data['6']['inputs']['text']
-            if '38' in data and 'unet_name' in data['38']['inputs']:
+            try:
                 # Flux workflow
+                data = json.loads(img.info["prompt"])
+                prompt = data['6']['inputs']['text']
                 model = data['38']['inputs']['unet_name'].split(".")[0]
-            elif '4' in data and 'ckpt_name' in data['4']['inputs']:
+            except KeyError:
                 # SDXL workflow
+                data = json.loads(img.info["prompt"])
+                prompt = data['6']['inputs']['text']
                 model = data['4']['inputs']['ckpt_name']
-            elif '80' in data and 'unet_name' in data['80']['inputs']:
-                # Qwen workflow
-                model = data['80']['inputs']['unet_name'].split(".")[0]
-            else:
-                model = "unknown"
-            return {"p":prompt,"m":model,"d":date}
+            return {"p":prompt,"m":model,"d":date} or {"p":"","m":"","c":""}
     except Exception as e:
         print(f"Error reading metadata from {path}: {e}")
         return ""
@@ -161,13 +146,8 @@ def load_openrouter_models_from_config():
     config = load_config()
     if config["openrouter"].get("enabled", "False").lower() == "true":
         models = config["openrouter"]["models"].split(",")
-        configured_models = sorted([model.strip() for model in models if model.strip()], key=str.lower)
-        free_models = []
-        if config["openrouter"].get("list_all_free_models", "False").lower() == "true":
-            from libs.openrouter import get_free_models
-            free_models = get_free_models()
-        return configured_models, free_models
-    return [], []
+        return sorted([model.strip() for model in models if model.strip()], key=str.lower)
+    return []
 def load_openwebui_models_from_config():
     config = load_config()
@@ -180,82 +160,38 @@ def load_prompt_models_from_config():
     """Load and return a list of available prompt generation models (both OpenWebUI and OpenRouter)."""
     config = load_config()
     prompt_models = []
     # Add OpenWebUI models if configured
     if "openwebui" in config and "models" in config["openwebui"]:
         openwebui_models = config["openwebui"]["models"].split(",")
         prompt_models.extend([("openwebui", model.strip()) for model in openwebui_models if model.strip()])
     # Add OpenRouter models if enabled and configured
     if config["openrouter"].get("enabled", "False").lower() == "true" and "models" in config["openrouter"]:
         openrouter_models = config["openrouter"]["models"].split(",")
         prompt_models.extend([("openrouter", model.strip()) for model in openrouter_models if model.strip()])
-    # Add free models if flag is set
-    if config["openrouter"].get("list_all_free_models", "False").lower() == "true":
-        from libs.openrouter import get_free_models
-        free_models = get_free_models()
-        prompt_models.extend([("openrouter", model) for model in free_models])
     return prompt_models
-def build_user_content(topic: str = "random") -> str:
-    """Build the user content string for prompt generation, including topic instructions and recent prompts avoidance."""
-    config = load_config()
-    topic_instruction = ""
-    selected_topic = ""
-    secondary_topic_instruction = ""
-    # Unique list of recent prompts
-    recent_prompts = list(set(load_recent_prompts()))
-    if topic == "random":
-        topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
-        selected_topic = random.choice(topics) if topics else ""
-    elif topic != "":
-        selected_topic = topic
-    else:
-        # Decide on whether to include a topic (e.g., 30% chance to include)
-        topics = [t.strip() for t in config["comfyui"]["topics"].split(",") if t.strip()]
-        if random.random() < 0.3 and topics:
-            selected_topic = random.choice(topics)
-    if selected_topic != "":
-        topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
-    # Add secondary topic if configured and not empty
-    secondary_topic = config["comfyui"].get("secondary_topic", "").strip()
-    if secondary_topic:
-        secondary_topic_instruction = f" Additionally incorporate the theme of '{secondary_topic}' into the new prompt."
-    user_content = (
-        "Can you generate me a really random image idea, Do not exceed 20 words. Use clear language, not poetic metaphors."
-        + topic_instruction
-        + secondary_topic_instruction
-        + "Avoid prompts similar to the following:"
-        + "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
-    )
-    return user_content
 def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
     """Create a prompt using a randomly selected model from OpenWebUI or OpenRouter.
     If OpenWebUI fails, it will retry once. If it fails again, it will fallback to OpenRouter.
     """
     prompt_models = load_prompt_models_from_config()
     if not prompt_models:
         logging.warning("No prompt generation models configured.")
         return None
     # Randomly select a model
     service, model = random.choice(prompt_models)
     # Import here to avoid circular imports
     from libs.openwebui import create_prompt_on_openwebui
     from libs.openrouter import create_prompt_on_openrouter
     if service == "openwebui":
         try:
             # First attempt with OpenWebUI
@@ -263,13 +199,13 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
             result = create_prompt_on_openwebui(base_prompt, topic, model)
             if result:
                 return result
             # If first attempt returns None, try again
             logging.warning("First OpenWebUI attempt failed. Retrying...")
             result = create_prompt_on_openwebui(base_prompt, topic, model)
             if result:
                 return result
             # If second attempt fails, fallback to OpenRouter
             logging.warning("Second OpenWebUI attempt failed. Falling back to OpenRouter...")
             openrouter_models = [m for m in prompt_models if m[0] == "openrouter"]
@@ -279,7 +215,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
             else:
                 logging.error("No OpenRouter models configured for fallback.")
                 return "A colorful abstract composition" # Default fallback prompt
         except Exception as e:
             logging.error(f"Error with OpenWebUI: {e}")
             # Fallback to OpenRouter on exception
@@ -295,7 +231,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
             else:
                 logging.error("No OpenRouter models configured for fallback.")
                 return "A colorful abstract composition" # Default fallback prompt
     elif service == "openrouter":
         try:
             # Use OpenRouter
@@ -303,7 +239,7 @@ def create_prompt_with_random_model(base_prompt: str, topic: str = "random"):
         except Exception as e:
             logging.error(f"Error with OpenRouter: {e}")
             return "A colorful abstract composition" # Default fallback prompt
 user_config = load_config()
 output_folder = user_config["comfyui"]["output_dir"]
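End to end, the retained `create_prompt_with_random_model` gives a fixed fallback chain: OpenWebUI, an OpenWebUI retry, OpenRouter, then a hard-coded prompt. A hedged usage sketch; the call signature comes from the diff and the base prompt is the sample config's default:

```python
from libs.generic import create_prompt_with_random_model

# Returns None if no prompt models are configured; otherwise a prompt string,
# possibly the hard-coded "A colorful abstract composition" fallback.
prompt = create_prompt_with_random_model(
    "Generate a random detailed prompt for stable diffusion.",
    topic="random",
)
print(prompt)
```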

File: libs/openrouter.py

@@ -2,7 +2,7 @@ import random
 import logging
 from openai import OpenAI, RateLimitError
 import nest_asyncio
-from libs.generic import load_recent_prompts, load_config, build_user_content
+from libs.generic import load_recent_prompts, load_config
 from libs.openwebui import create_prompt_on_openwebui
 import re
 nest_asyncio.apply()
@@ -14,78 +14,49 @@ LOG_FILE = "./prompts_log.jsonl"
 user_config = load_config()
 output_folder = user_config["comfyui"]["output_dir"]
-def get_free_models():
-    """Fetch all free models from OpenRouter."""
-    if user_config["openrouter"].get("enabled", "False").lower() != "true":
-        return []
-    try:
-        client = OpenAI(
-            base_url="https://openrouter.ai/api/v1",
-            api_key=user_config["openrouter"]["api_key"],
-        )
-        all_models_response = client.models.list()
-        all_models = [m.id for m in all_models_response.data]
-        free_models = [m for m in all_models if "free" in m.lower()]
-        return sorted(free_models, key=str.lower)
-    except Exception as e:
-        logging.warning(f"Failed to fetch free models from OpenRouter: {e}")
-        return []
 def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str = None) -> str:
     """Sends prompt to OpenRouter and returns the generated response."""
-    # Reload config to get latest values
-    config = load_config()
     # Check if OpenRouter is enabled
-    if config["openrouter"].get("enabled", "False").lower() != "true":
+    if user_config["openrouter"].get("enabled", "False").lower() != "true":
         logging.warning("OpenRouter is not enabled in the configuration.")
         return ""
-    user_content = build_user_content(topic)
-    # Load configured models
-    configured_models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
-    if not configured_models:
-        logging.error("No OpenRouter models configured.")
-        return ""
-    # Create client early for model checking
-    client = OpenAI(
-        base_url="https://openrouter.ai/api/v1",
-        api_key=user_config["openrouter"]["api_key"],
-    )
-    # Select model
+    topic_instruction = ""
+    selected_topic = ""
+    # Unique list of recent prompts
+    recent_prompts = list(set(load_recent_prompts()))
+    if topic == "random":
+        topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
+        selected_topic = random.choice(topics) if topics else ""
+    elif topic != "":
+        selected_topic = topic
+    else:
+        # Decide on whether to include a topic (e.g., 30% chance to include)
+        topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
+        if random.random() < 0.3 and topics:
+            selected_topic = random.choice(topics)
+    if selected_topic != "":
+        topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
+    user_content = (
+        "Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+        + topic_instruction
+        + "Avoid prompts similar to the following:"
+        + "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
+    )
+    # Use the specified model or select a random model from the configured OpenRouter models
     if model:
-        original_model = model
-        # Always check if model exists on OpenRouter
-        try:
-            all_models_response = client.models.list()
-            all_models = [m.id for m in all_models_response.data]
-            if model not in all_models:
-                # Fallback to random free model from all OpenRouter models
-                free_models = [m for m in all_models if "free" in m.lower()]
-                if free_models:
-                    model = random.choice(free_models)
-                    logging.info(f"Specified model '{original_model}' not found on OpenRouter, falling back to free model: {model}")
-                else:
-                    # No free models, fallback to random configured model
-                    model = random.choice(configured_models)
-                    logging.warning(f"Specified model '{original_model}' not found, no free models available on OpenRouter, using random configured model: {model}")
-            # else model exists, use it
-        except Exception as e:
-            logging.warning(f"Failed to fetch OpenRouter models for validation: {e}. Falling back to configured models.")
-            if model not in configured_models:
-                # Fallback to random free from configured
-                free_models = [m for m in configured_models if "free" in m.lower()]
-                if free_models:
-                    model = random.choice(free_models)
-                    logging.info(f"Specified model '{original_model}' not found, falling back to free configured model: {model}")
-                else:
-                    model = random.choice(configured_models)
-                    logging.warning(f"Specified model '{original_model}' not found, no free configured models available, using random configured model: {model}")
-            # else use the specified model
+        # Use the specified model
+        model = model
     else:
-        model = random.choice(configured_models)
+        # Select a random model from the configured OpenRouter models
+        models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
+        if not models:
+            logging.error("No OpenRouter models configured.")
+            return ""
+        model = random.choice(models)
     try:
         client = OpenAI(
@@ -93,50 +64,25 @@ def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str =
             api_key=user_config["openrouter"]["api_key"],
         )
-        system_content = (
-            "You are a prompt generator for Stable Diffusion. "
-            "Generate a detailed and imaginative prompt with a strong visual theme. "
-            "Focus on lighting, atmosphere, and artistic style. "
-            "Keep the prompt concise, no extra commentary or formatting."
-        )
-        # Try the specified model first
-        try:
-            completion = client.chat.completions.create(
-                model=model,
-                messages=[
-                    {
-                        "role": "system",
-                        "content": system_content,
-                    },
-                    {
-                        "role": "user",
-                        "content": user_content,
-                    },
-                ]
-            )
-        except Exception as e:
-            # If system message fails (e.g., model doesn't support developer instructions),
-            # retry with instructions included in user message
-            if "developer instruction" in str(e).lower() or "system message" in str(e).lower():
-                logging.info(f"Model {model} doesn't support system messages, retrying with instructions in user message")
-                combined_content = f"{system_content}\n\n{user_content}"
-                completion = client.chat.completions.create(
-                    model=model,
-                    messages=[
-                        {
-                            "role": "user",
-                            "content": combined_content,
-                        },
-                    ]
-                )
-            else:
-                # If it's another error, try fallback models
-                logging.warning(f"Error with model {model}: {e}. Trying fallback models.")
-                raise e
-        # If we get here, the completion was successful
+        completion = client.chat.completions.create(
+            model=model,
+            messages=[
+                {
+                    "role": "system",
+                    "content": (
+                        "You are a prompt generator for Stable Diffusion. "
+                        "Generate a detailed and imaginative prompt with a strong visual theme. "
+                        "Focus on lighting, atmosphere, and artistic style. "
+                        "Keep the prompt concise, no extra commentary or formatting."
+                    ),
+                },
+                {
+                    "role": "user",
+                    "content": user_content,
+                },
+            ]
+        )
         prompt = completion.choices[0].message.content.strip('"')
         match = re.search(r'"([^"]+)"', prompt)
         if not match:
@@ -160,45 +106,5 @@ def create_prompt_on_openrouter(prompt: str, topic: str = "random", model: str =
             logging.error("No OpenWebUI models configured for fallback.")
             return "A colorful abstract composition" # Final fallback
     except Exception as e:
-        # If the specified model fails, try fallback models
-        logging.warning(f"Primary model {model} failed: {e}. Trying fallback models.")
-        # Get all available models for fallback
-        configured_models = [m.strip() for m in user_config["openrouter"]["models"].split(",") if m.strip()]
-        free_models = get_free_models()
-        # Combine configured and free models, excluding the failed one
-        all_models = configured_models + free_models
-        fallback_models = [m for m in all_models if m != model]
-        if not fallback_models:
-            logging.error("No fallback models available.")
-            return ""
-        # Try up to 3 fallback models
-        for fallback_model in fallback_models[:3]:
-            try:
-                logging.info(f"Trying fallback model: {fallback_model}")
-                completion = client.chat.completions.create(
-                    model=fallback_model,
-                    messages=[
-                        {
-                            "role": "user",
-                            "content": f"{system_content}\n\n{user_content}",
-                        },
-                    ]
-                )
-                prompt = completion.choices[0].message.content.strip('"')
-                match = re.search(r'"([^"]+)"', prompt)
-                if not match:
-                    match = re.search(r":\s*\n*\s*(.+)", prompt)
-                if match:
-                    prompt = match.group(1)
-                logging.info(f"Successfully generated prompt with fallback model: {fallback_model}")
-                return prompt
-            except Exception as fallback_e:
-                logging.warning(f"Fallback model {fallback_model} also failed: {fallback_e}")
-                continue
-        logging.error("All models failed, including fallbacks.")
+        logging.error(f"Error generating prompt with OpenRouter: {e}")
         return ""

File: libs/openwebui.py

@@ -1,7 +1,7 @@
 import random
 import logging
 import nest_asyncio
-from libs.generic import load_recent_prompts, load_config, build_user_content
+from libs.generic import load_recent_prompts, load_config
 import re
 from openwebui_chat_client import OpenWebUIClient
 from datetime import datetime
@@ -17,9 +17,29 @@ output_folder = user_config["comfyui"]["output_dir"]
 def create_prompt_on_openwebui(prompt: str, topic: str = "random", model: str = None) -> str:
     """Sends prompt to OpenWebui and returns the generated response."""
-    # Reload config to get latest values
-    config = load_config()
-    user_content = build_user_content(topic)
+    topic_instruction = ""
+    selected_topic = ""
+    # Unique list of recent prompts
+    recent_prompts = list(set(load_recent_prompts()))
+    if topic == "random":
+        topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
+        selected_topic = random.choice(topics)
+    elif topic != "":
+        selected_topic = topic
+    else:
+        # Decide on whether to include a topic (e.g., 30% chance to include)
+        topics = [t.strip() for t in user_config["comfyui"]["topics"].split(",") if t.strip()]
+        if random.random() < 0.3 and topics:
+            selected_topic = random.choice(topics)
+    if selected_topic != "":
+        topic_instruction = f" Incorporate the theme of '{selected_topic}' into the new prompt."
+    user_content = (
+        "Can you generate me a really random image idea, Do not exceed 10 words. Use clear language, not poetic metaphors."
+        + topic_instruction
+        + "Avoid prompts similar to the following:"
+        + "\n".join(f"{i+1}. {p}" for i, p in enumerate(recent_prompts))
+    )
     if model:
         # Use the specified model
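One behavioural note on the inlined block above: `list(set(load_recent_prompts()))` deduplicates the recent prompts but discards their order, so the numbered avoid-list is arbitrarily ordered. A small illustration, with `dict.fromkeys` as the order-preserving alternative:

```python
recent_prompts = ["sunset", "castle", "sunset", "robot"]

print(list(set(recent_prompts)))            # deduplicated, arbitrary order
print(list(dict.fromkeys(recent_prompts)))  # deduplicated, original order kept
```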

File: Flask routes for /create and /create_image (filename not captured)

@@ -38,8 +38,8 @@ def create():
     # Load all models (SDXL, FLUX, and Qwen)
     sdxl_models, flux_models, qwen_models = load_models_from_config()
     openwebui_models = load_openwebui_models_from_config()
-    openrouter_models, openrouter_free_models = load_openrouter_models_from_config()
+    openrouter_models = load_openrouter_models_from_config()
     queue_count = get_queue_count()
     return render_template("create_image.html",
                            sdxx_models=sdxl_models,

@@ -47,7 +47,6 @@ def create():
                            qwen_models=qwen_models,
                            openwebui_models=openwebui_models,
                            openrouter_models=openrouter_models,
-                           openrouter_free_models=openrouter_free_models,
                            topics=load_topics_from_config(),
                            queue_count=queue_count)
@@ -69,8 +68,8 @@ def create_image_page():
     # Load all models (SDXL, FLUX, and Qwen)
     sdxl_models, flux_models, qwen_models = load_models_from_config()
     openwebui_models = load_openwebui_models_from_config()
-    openrouter_models, openrouter_free_models = load_openrouter_models_from_config()
+    openrouter_models = load_openrouter_models_from_config()
     queue_count = get_queue_count()
     return render_template("create_image.html",
                            sdxl_models=sdxl_models,

@@ -78,7 +77,6 @@ def create_image_page():
                            qwen_models=qwen_models,
                            openwebui_models=openwebui_models,
                            openrouter_models=openrouter_models,
-                           openrouter_free_models=openrouter_free_models,
                            topics=load_topics_from_config(),
                            queue_count=queue_count)

File: create_image.html (Jinja template)

@@ -249,13 +249,6 @@
                 {% endfor %}
                 </optgroup>
                 {% endif %}
-                {% if openrouter_free_models %}
-                <optgroup label="OpenRouter Free">
-                    {% for m in openrouter_free_models %}
-                    <option value="openrouter:{{ m }}">{{ m }}</option>
-                    {% endfor %}
-                </optgroup>
-                {% endif %}
             </select>
         </div>

File: user_config.cfg.sample

@@ -13,15 +13,11 @@ output_dir = ./output/
 prompt = "Generate a random detailed prompt for stable diffusion."
 width = 1568
 height = 672
 topics =
-secondary_topic =
-flux = False
-qwen = False
-only_flux = False
+FLUX = False
+ONLY_FLUX = False
-[comfyui:qwen]
-models = qwen-image-Q4_K_S.gguf, qwen-image-Q2_K.gguf
 [comfyui:flux]
 models = flux1-dev-Q4_0.gguf,flux1-schnell-Q4_0.gguf

@@ -33,5 +29,4 @@ models = llama3:latest,cogito:14b,gemma3:12b
 [openrouter]
 enabled = False
 api_key =
 models = mistralai/mistral-7b-instruct:free,google/gemma-7b-it:free,meta-llama/llama-3.1-8b-instruct:free
-list_all_free_models = False
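Note that `configparser` gives `[comfyui:flux]` no nesting semantics; the colon is simply part of the section name. A minimal sketch reading the sample above:

```python
import configparser

config = configparser.ConfigParser()
config.read("user_config.cfg.sample")

# "comfyui:flux" is a literal section name; the colon is not special.
flux_models = [m.strip() for m in config["comfyui:flux"]["models"].split(",")]
print(flux_models)  # ['flux1-dev-Q4_0.gguf', 'flux1-schnell-Q4_0.gguf']
```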

File: workflow_flux_original.json (new file, +433 lines)

@@ -0,0 +1,433 @@
{
"8": {
"inputs": {
"samples": [
"62",
1
],
"vae": [
"27",
0
]
},
"class_type": "VAEDecode",
"_meta": {
"title": "VAE Decode"
}
},
"22": {
"inputs": {
"clip_name1": "t5/t5xxl_fp8_e4m3fn.safetensors",
"clip_name2": "clip_l.safetensors",
"type": "flux",
"device": "default"
},
"class_type": "DualCLIPLoader",
"_meta": {
"title": "DualCLIPLoader"
}
},
"27": {
"inputs": {
"vae_name": "FLUX1/ae.safetensors"
},
"class_type": "VAELoader",
"_meta": {
"title": "Load VAE"
}
},
"32": {
"inputs": {
"upscale_model": [
"33",
0
],
"image": [
"8",
0
]
},
"class_type": "ImageUpscaleWithModel",
"_meta": {
"title": "Upscale Image (using Model)"
}
},
"33": {
"inputs": {
"model_name": "4x-UltraSharp.pth"
},
"class_type": "UpscaleModelLoader",
"_meta": {
"title": "Load Upscale Model"
}
},
"34": {
"inputs": {
"upscale_method": "lanczos",
"scale_by": 0.5,
"image": [
"32",
0
]
},
"class_type": "ImageScaleBy",
"_meta": {
"title": "Half size"
}
},
"35": {
"inputs": {
"unet_name": "flux1-dev-Q4_0.gguf"
},
"class_type": "UnetLoaderGGUF",
"_meta": {
"title": "Unet Loader (GGUF)"
}
},
"40": {
"inputs": {
"int": 20
},
"class_type": "Int Literal (Image Saver)",
"_meta": {
"title": "Generation Steps"
}
},
"41": {
"inputs": {
"width": 720,
"height": 1080,
"aspect_ratio": "custom",
"swap_dimensions": "Off",
"upscale_factor": 2,
"prescale_factor": 1,
"batch_size": 1
},
"class_type": "CR Aspect Ratio",
"_meta": {
"title": "CR Aspect Ratio"
}
},
"42": {
"inputs": {
"filename": "THISFILE",
"path": "",
"extension": "png",
"steps": [
"40",
0
],
"cfg": [
"52",
0
],
"modelname": "flux1-dev-Q4_0.gguf",
"sampler_name": [
"50",
1
],
"positive": [
"44",
0
],
"negative": [
"45",
0
],
"seed_value": [
"48",
0
],
"width": [
"41",
0
],
"height": [
"41",
1
],
"lossless_webp": true,
"quality_jpeg_or_webp": 100,
"optimize_png": false,
"counter": 0,
"denoise": [
"53",
0
],
"clip_skip": 0,
"time_format": "%Y-%m-%d-%H%M%S",
"save_workflow_as_json": true,
"embed_workflow": true,
"additional_hashes": "",
"download_civitai_data": true,
"easy_remix": true,
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"images": [
"34",
0
]
},
"class_type": "Image Saver",
"_meta": {
"title": "CivitAI Image Saver"
}
},
"44": {
"inputs": {
"text": "",
"speak_and_recognation": {
"__value__": [
false,
true
]
}
},
"class_type": "ttN text",
"_meta": {
"title": "Positive Prompt T5"
}
},
"45": {
"inputs": {
"text": "text, watermark, deformed Avoid flat colors, poor lighting, and artificial elements. No unrealistic elements, low resolution, or flat colors. Avoid generic objects, poor lighting, and inconsistent styles, blurry, low-quality, distorted faces, overexposed lighting, extra limbs, bad anatomy, low contrast",
"speak_and_recognation": {
"__value__": [
false,
true
]
}
},
"class_type": "ttN text",
"_meta": {
"title": "Negative Prompt"
}
},
"47": {
"inputs": {
"text": [
"44",
0
],
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"68",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"48": {
"inputs": {
"seed": 903006749445372,
"increment": 1
},
"class_type": "Seed Generator (Image Saver)",
"_meta": {
"title": "Seed"
}
},
"49": {
"inputs": {
"scheduler": "beta"
},
"class_type": "Scheduler Selector (Comfy) (Image Saver)",
"_meta": {
"title": "Scheduler Selector"
}
},
"50": {
"inputs": {
"sampler_name": "euler"
},
"class_type": "Sampler Selector (Image Saver)",
"_meta": {
"title": "Sampler Selector (Image Saver)"
}
},
"51": {
"inputs": {
"images": [
"8",
0
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
},
"52": {
"inputs": {
"float": 3.5
},
"class_type": "Float Literal (Image Saver)",
"_meta": {
"title": "CFG"
}
},
"53": {
"inputs": {
"float": 1
},
"class_type": "Float Literal (Image Saver)",
"_meta": {
"title": "Denoise"
}
},
"60": {
"inputs": {
"clip_l": "",
"t5xxl": [
"44",
0
],
"guidance": [
"52",
0
],
"speak_and_recognation": {
"__value__": [
false,
true
]
},
"clip": [
"68",
1
]
},
"class_type": "CLIPTextEncodeFlux",
"_meta": {
"title": "CLIPTextEncodeFlux"
}
},
"62": {
"inputs": {
"noise": [
"65",
0
],
"guider": [
"67",
0
],
"sampler": [
"63",
0
],
"sigmas": [
"64",
0
],
"latent_image": [
"41",
5
]
},
"class_type": "SamplerCustomAdvanced",
"_meta": {
"title": "SamplerCustomAdvanced"
}
},
"63": {
"inputs": {
"sampler_name": [
"50",
0
]
},
"class_type": "KSamplerSelect",
"_meta": {
"title": "KSamplerSelect"
}
},
"64": {
"inputs": {
"scheduler": [
"49",
0
],
"steps": [
"40",
0
],
"denoise": [
"53",
0
],
"model": [
"68",
0
]
},
"class_type": "BasicScheduler",
"_meta": {
"title": "BasicScheduler"
}
},
"65": {
"inputs": {
"noise_seed": [
"48",
0
]
},
"class_type": "RandomNoise",
"_meta": {
"title": "RandomNoise"
}
},
"67": {
"inputs": {
"model": [
"68",
0
],
"conditioning": [
"47",
0
]
},
"class_type": "BasicGuider",
"_meta": {
"title": "BasicGuider"
}
},
"68": {
"inputs": {
"lora_01": "None",
"strength_01": 1,
"lora_02": "None",
"strength_02": 1,
"lora_03": "None",
"strength_03": 1,
"lora_04": "None",
"strength_04": 1,
"model": [
"35",
0
],
"clip": [
"22",
0
]
},
"class_type": "Lora Loader Stack (rgthree)",
"_meta": {
"title": "Lora Loader Stack (rgthree)"
}
}
}
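This is an API-format ComfyUI workflow, so it can be queued directly against a ComfyUI server's `/prompt` endpoint. A hedged sketch; the server address and the choice of node `44` (the empty "Positive Prompt T5" node) as the injection point are assumptions for illustration:

```python
import json
import urllib.request

with open("workflow_flux_original.json") as f:
    workflow = json.load(f)

# Node "44" is the empty positive-prompt node in the JSON above; the
# server address is an assumption for illustration.
workflow["44"]["inputs"]["text"] = "A misty mountain village at dawn"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```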

File: Qwen image workflow JSON (filename not captured)

@@ -1,26 +1,45 @@
 {
-  "3": {
+  "93": {
     "inputs": {
-      "seed": 367723847870487,
-      "steps": 8,
-      "cfg": 2.5,
+      "text": "jpeg compression",
+      "speak_and_recognation": {
+        "__value__": [
+          false,
+          true
+        ]
+      },
+      "clip": [
+        "126",
+        0
+      ]
+    },
+    "class_type": "CLIPTextEncode",
+    "_meta": {
+      "title": "CLIP Text Encode (Prompt)"
+    }
+  },
+  "95": {
+    "inputs": {
+      "seed": 22,
+      "steps": 10,
+      "cfg": 4.5,
       "sampler_name": "euler",
-      "scheduler": "simple",
+      "scheduler": "normal",
       "denoise": 1,
       "model": [
-        "66",
+        "127",
         0
       ],
       "positive": [
-        "6",
+        "100",
         0
       ],
       "negative": [
-        "7",
+        "93",
         0
       ],
       "latent_image": [
-        "58",
+        "97",
         0
       ]
     },
@@ -29,40 +48,26 @@
       "title": "KSampler"
     }
   },
-  "6": {
+  "97": {
     "inputs": {
-      "text": "Cat sitting at desk, surrounded by taxidermied dinosaurs",
-      "clip": [
-        "38",
-        0
-      ]
+      "width": 1280,
+      "height": 768,
+      "length": 1,
+      "batch_size": 1
     },
-    "class_type": "CLIPTextEncode",
+    "class_type": "EmptyHunyuanLatentVideo",
     "_meta": {
-      "title": "Positive"
+      "title": "EmptyHunyuanLatentVideo"
     }
   },
-  "7": {
-    "inputs": {
-      "text": "text, watermark, deformed Avoid flat colors, poor lighting, and artificial elements. No unrealistic elements, low resolution, or flat colors. Avoid generic objects, poor lighting, and inconsistent styles, blurry, low-quality, distorted faces, overexposed lighting, extra limbs, bad anatomy, low contrast",
-      "clip": [
-        "38",
-        0
-      ]
-    },
-    "class_type": "CLIPTextEncode",
-    "_meta": {
-      "title": "Negative"
-    }
-  },
-  "8": {
+  "98": {
     "inputs": {
       "samples": [
-        "3",
+        "95",
         0
       ],
       "vae": [
-        "39",
+        "128",
         0
       ]
     },
@@ -71,84 +76,86 @@
       "title": "VAE Decode"
     }
   },
-  "38": {
+  "100": {
     "inputs": {
-      "clip_name": "qwen_2.5_vl_7b_fp8_scaled.safetensors",
-      "type": "qwen_image",
-      "device": "default"
+      "text": "Terminator riding a push bike",
+      "speak_and_recognation": {
+        "__value__": [
+          false,
+          true
+        ]
+      },
+      "clip": [
+        "126",
+        0
+      ]
     },
-    "class_type": "CLIPLoader",
+    "class_type": "CLIPTextEncode",
     "_meta": {
-      "title": "Load CLIP"
+      "title": "CLIP Text Encode (Prompt)"
     }
   },
-  "39": {
-    "inputs": {
-      "vae_name": "qwen_image_vae.safetensors"
-    },
-    "class_type": "VAELoader",
-    "_meta": {
-      "title": "Load VAE"
-    }
-  },
-  "58": {
-    "inputs": {
-      "width": 720,
-      "height": 1088,
-      "batch_size": 1
-    },
-    "class_type": "EmptySD3LatentImage",
-    "_meta": {
-      "title": "CR Aspect Ratio"
-    }
-  },
-  "60": {
+  "102": {
     "inputs": {
-      "filename_prefix": "ComfyUI",
       "images": [
-        "8",
+        "129",
         0
       ]
     },
-    "class_type": "SaveImage",
+    "class_type": "PreviewImage",
     "_meta": {
-      "title": "Save Image"
+      "title": "Preview Image"
     }
   },
-  "66": {
-    "inputs": {
-      "shift": 3.1000000000000005,
-      "model": [
-        "73",
-        0
-      ]
-    },
-    "class_type": "ModelSamplingAuraFlow",
-    "_meta": {
-      "title": "ModelSamplingAuraFlow"
-    }
-  },
-  "73": {
-    "inputs": {
-      "lora_name": "Qwen-Image-Lightning-8steps-V1.0.safetensors",
-      "strength_model": 1,
-      "model": [
-        "80",
-        0
-      ]
-    },
-    "class_type": "LoraLoaderModelOnly",
-    "_meta": {
-      "title": "LoraLoaderModelOnly"
-    }
-  },
-  "80": {
-    "inputs": {
-      "unet_name": "qwen-image-Q4_K_S.gguf"
-    },
-    "class_type": "UnetLoaderGGUF",
-    "_meta": {
-      "title": "Load Checkpoint"
-    }
+  "126": {
+    "inputs": {
+      "clip_name": "Qwen2.5-VL-7B-Instruct-Q3_K_M.gguf",
+      "type": "qwen_image",
+      "device": "cuda:1",
+      "virtual_vram_gb": 6,
+      "use_other_vram": true,
+      "expert_mode_allocations": ""
+    },
+    "class_type": "CLIPLoaderGGUFDisTorchMultiGPU",
+    "_meta": {
+      "title": "CLIPLoaderGGUFDisTorchMultiGPU"
+    }
+  },
+  "127": {
+    "inputs": {
+      "unet_name": "qwen-image-Q2_K.gguf",
+      "device": "cuda:0",
+      "virtual_vram_gb": 6,
+      "use_other_vram": true,
+      "expert_mode_allocations": ""
+    },
+    "class_type": "UnetLoaderGGUFDisTorchMultiGPU",
+    "_meta": {
+      "title": "UnetLoaderGGUFDisTorchMultiGPU"
+    }
+  },
+  "128": {
+    "inputs": {
+      "vae_name": "qwen_image_vae.safetensors",
+      "device": "cuda:1"
+    },
+    "class_type": "VAELoaderMultiGPU",
+    "_meta": {
+      "title": "VAELoaderMultiGPU"
+    }
+  },
+  "129": {
+    "inputs": {
+      "offload_model": true,
+      "offload_cache": true,
+      "anything": [
+        "98",
+        0
+      ]
+    },
+    "class_type": "VRAMCleanup",
+    "_meta": {
+      "title": "🎈VRAM-Cleanup"
+    }
   }
 }