Transform simple ideas into production-ready prompts
Quality Prompts is a free, serverless tool that transforms simple ideas into high-quality, production-ready prompts. You provide a short description of what you want, and the tool builds a detailed, structured prompt for you. It adds objectives, constraints, deliverables, evaluation criteria, and edge case handling automatically.
Select a subject type, then optionally choose a prompt style to tailor the output for a specific workflow. Choose a target model class to optimize prompt length and complexity. Type a short idea into the text field; a sentence or two is ideal. Click Generate Prompt and the tool returns your optimized prompt in three formats: Plain Text for copy-pasting, Structured with markdown, and JSON for agents and APIs.
Development and Build Based On prompts instruct models to use JSON objects for configuration, state management, and data structures. This promotes composability (JSON objects can be merged and extended), extensibility (new properties don't break existing code), and interoperability (JSON is the universal interchange format). When you generate a Development or Build Based On prompt, the output will guide the target model to organize data in structured JSON rather than scattered variables.
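The composability and extensibility claims above can be sketched in a few lines. This is an illustrative example only (the keys `theme`, `retries`, and `timeout_ms` are made up, not part of any generated prompt):

```python
import json

# Base configuration expressed as a JSON object.
base = {"theme": "dark", "retries": 3}

# An extension adds a new property and overrides one value
# without breaking consumers that only know about the base keys.
override = {"retries": 5, "timeout_ms": 2000}

# Merging: later objects win on conflicting keys; base is untouched.
merged = {**base, **override}

print(json.dumps(merged, sort_keys=True))
```

Because the merged result is still plain JSON, it can be serialized and handed to any other tool or language, which is the interoperability point above.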
Every subject type has specialized prompt styles that adapt the generated prompt for different workflows. When you select a subject type, the Prompt Style dropdown appears with options tailored to that category. Select General to use the default dimensions without a specific style.
Development — Specification Prompt for new projects with no prior context. Iteration Prompt for targeted changes to existing code. Diagnostic Prompt for debugging unknown failures. Serverless (Multi-Cloud) for AWS Lambda, GCP Functions, Azure, or Cloudflare with IaC tooling. Vercel (Next.js/Edge) for Vercel-deployed apps with framework integration and edge functions. Blockchain/Web3 for smart contracts and decentralized applications. Jekyll Blog Site for static blogs with installation instructions. HTML/CSS/JS (GitHub Pages) for simple static sites with no build tools.
Writing — Creative Writing (Long-Form) for essays, articles, and narrative work. Short-Form Copy for ads, taglines, and microcopy. Marketing Communications for emails, newsletters, and press releases.
Strategy — Business Strategy for competitive positioning and growth planning. Go-to-Market Strategy for launch plans and pricing. Technical Strategy for architecture decisions and technology selection.
Product — Product Requirements Document for full PRDs with acceptance criteria. User Stories for sprint-ready stories in standard format. Feature Specification for detailed single-feature specs covering all states.
Design — UI/UX Design for wireframes, flows, and component specs. Design Assets for logos, icons, and visual elements. Photo Editing for image modification and compositing instructions.
Marketing — Campaign Planning for multi-channel campaigns with budgets and KPIs. Content Strategy for editorial calendars and content pillars. Social Media for platform-specific strategies and posts.
Research — Literature Review for systematic source synthesis and gap analysis. User Research for interview guides, surveys, and usability studies. Market Research for competitive analysis and market sizing.
Data Analysis — Exploratory Analysis for data profiling and pattern discovery. Dashboard & Reporting for KPI dashboards with metric definitions. Statistical Modeling for regression, hypothesis testing, and model validation.
Build Based On — Requires a URL to analyze. Replicate for building a new site inspired by an existing reference. Extend for adding new features while maintaining design consistency. Improve for implementing fixes to design, performance, accessibility, or UX. Tech stack options: As Serverless (Multi-Cloud), As Vercel (Next.js/Edge), As Blockchain/Web3, As Jekyll Site, or As HTML/CSS/JS.
Quality Prompts supports multiple API providers. Select your provider in the API Settings panel and enter your API key.
Puter GPT-OSS is the default and is completely free with no API key required. It uses Puter's user-pays model, which means you as a developer or site owner pay nothing. Instead, each person who uses the tool covers their own AI inference cost through their Puter account. When a user first triggers an AI request, Puter handles authentication automatically. Users get a free usage allowance, and any usage beyond that is billed to their own Puter account, not yours. This means the tool can scale to unlimited users at zero cost to the host. The gpt-oss-120b model (117B parameters) produces the best results but is slower. The gpt-oss-20b model (21B parameters) is faster with lower quality. Both are open-weight models from OpenAI running on Puter's infrastructure.
OpenRouter is recommended for users who want access to hundreds of models (Claude, GPT, Gemini, Llama, Mistral, and more) through a single API key. It is CORS-friendly and works directly in the browser without a proxy. Get a key at openrouter.ai/keys.
Anthropic connects directly to the Claude Messages API. Get a key at console.anthropic.com.
OpenAI connects to the Chat Completions API. Get a key at platform.openai.com.
Google Gemini connects to the Gemini API. Get a key at aistudio.google.com.
Ollama runs AI models locally on your machine with gpt-oss:20b as the default model. No API key needed, no data leaves your computer, completely free. Ollama is the recommended fallback if Puter's free tier runs out — Quality Prompts will automatically suggest it with OS-specific setup instructions (macOS, Windows, Linux) if Puter encounters an error. This works from any URL including GitHub Pages — you just need to start Ollama with OLLAMA_ORIGINS=* to allow browser access. Click Setup Instructions in the Ollama settings for a full guide with OS-specific steps. Quality Prompts checks your installed models and verifies the connection before sending the request.
Custom Endpoint supports any OpenAI-compatible API including Together, LM Studio, and other self-hosted models. Enter your base URL and it uses the /chat/completions format with Bearer token auth.
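As a sketch of what "OpenAI-compatible with Bearer token auth" means in practice, here is how a request to a custom endpoint is shaped. The base URL, key, and model name below are placeholders (LM Studio's default port is used as an example), not values the tool requires:

```python
import json
import urllib.request

# Hypothetical values — substitute your own endpoint, key, and model.
base_url = "http://localhost:1234/v1"
api_key = "YOUR_API_KEY"

payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello"}],
}

req = urllib.request.Request(
    base_url + "/chat/completions",          # the standard path suffix
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # Bearer token auth
    },
)
# urllib.request.urlopen(req) would send it; in the OpenAI-compatible
# format, the reply text lives at choices[0].message.content.
```

Any server that accepts this request shape (Together, LM Studio, vLLM, and most self-hosted gateways) should work as a Custom Endpoint.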
Keep your idea brief. The whole point of this tool is to do the heavy lifting for you, so you do not need to write a full prompt yourself. Focus on what you want accomplished, not on how to prompt for it. The more specific your idea is, the better the output will be. For example, "build a dashboard that tracks user retention by cohort" will produce a much better prompt than just "build a dashboard."
The Share Idea button (above Generate Prompt) opens a modal where you can copy a URL prefilled with your idea. Choose "Copy link" to share a URL that prefills the idea for the recipient, or "Copy link with auto-generate" to share a URL that also triggers prompt generation automatically when opened.
Once a prompt is generated, the Use this prompt section at the bottom lets you open the generated prompt directly in ChatGPT, Claude, Copilot, or Gemini. ChatGPT receives the prompt automatically. For Claude, Copilot, and Gemini, the prompt is copied to your clipboard and the service opens in a new tab — just paste it when the page loads.
You can also Share via Email from the Plain Text tab to send the full generated prompt by email.
The Plain Text tab gives you a clean, readable prompt you can paste directly into any chat interface. The Structured tab formats the same prompt with markdown headings, sections, and bullet points for easy scanning. The JSON tab provides a programmatic format with separate fields for system instructions, user prompts, constraints, and evaluation criteria, which is useful for agents and API integrations.
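To make the JSON tab concrete, here is a hypothetical shape of such an output. The field names and values below are illustrative assumptions, not the tool's exact schema:

```python
import json

# Hypothetical example of a JSON-tab payload — actual field names may differ.
prompt_json = {
    "system": "You are an expert front-end engineer...",
    "user_prompt": "Build a dashboard that tracks user retention by cohort.",
    "constraints": ["Use vanilla JavaScript", "No build step"],
    "evaluation_criteria": ["Loads quickly", "Meets WCAG AA"],
}

# An agent or API integration can consume the fields directly,
# e.g. mapping "system" to a system message and "user_prompt" to a user turn.
print(json.dumps(prompt_json, indent=2))
```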
You can prefill the idea input via URL. Add ?prompt=your idea here to the URL to prefill, or add &enter to also auto-generate the prompt on page load. The bare format ?=your idea here also works. This is useful for bookmarks, integrations, and the Share Idea feature.
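For integrations, a prefill link can be built programmatically. This sketch URL-encodes the idea (spaces become %20); the hosted instance URL is the one mentioned elsewhere in this guide:

```python
from urllib.parse import quote

BASE = "https://97115104.github.io/qualityprompts/"
idea = "build a dashboard that tracks user retention by cohort"

# Prefill the idea field only:
prefill_url = f"{BASE}?prompt={quote(idea)}"

# Prefill and auto-generate the prompt on page load:
auto_url = f"{BASE}?prompt={quote(idea)}&enter"

print(prefill_url)
print(auto_url)
```

Pasting such a link into a browser should load the page with the idea already in the text field.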
All processing happens through the selected API provider. No data is stored on any server.
python3 -m http.server 8000
Then open http://localhost:8000.
Powered by Puter.com — free, no API key needed.
Runs locally on your machine — no API key, no data leaves your computer.
Ollama lets you run AI models locally on your own machine. No API key needed, no data leaves your computer, and it's completely free. This works whether you're running Quality Prompts locally or from https://97115104.github.io/qualityprompts/ — your browser connects directly to Ollama on your machine.
Browser requirement: When using Ollama from GitHub Pages (or any HTTPS URL), Google Chrome is required. Safari and other WebKit browsers block HTTPS pages from connecting to local HTTP servers. On localhost, any browser works.
curl -fsSL https://ollama.com/install.sh | sh
Or download the macOS app from ollama.com/download.
ollama list
If you see gpt-oss:20b or any gpt-oss model, skip to step 4. Otherwise continue.
ollama pull gpt-oss:20b
First, stop any running Ollama instance. If you see address already in use when running ollama serve, Ollama is already running and must be stopped first:
# If the desktop app is running, click the menu bar icon → Quit Ollama
# Or kill the process:
pkill ollama
Then start Ollama with browser access:
OLLAMA_ORIGINS=* ollama serve
To make this permanent so you never have to set it manually, add to your ~/.zshrc (or ~/.bashrc):
export OLLAMA_ORIGINS=*
Then restart your terminal and the Ollama desktop app will use this setting automatically.
Download the Windows installer from ollama.com/download and run it.
Open PowerShell and run:
ollama list
If you see gpt-oss:20b or any gpt-oss model, skip to step 4. Otherwise continue.
ollama pull gpt-oss:20b
First, stop any running Ollama instance. If you see address already in use, Ollama is already running. Close it from the system tray (right-click → Quit), or in PowerShell:
Stop-Process -Name ollama -Force
Then start with browser access:
$env:OLLAMA_ORIGINS="*"; ollama serve
To make this permanent so Ollama always allows browser access:
[System.Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")
Then restart the Ollama desktop app — it will use this setting automatically.
curl -fsSL https://ollama.com/install.sh | sh
ollama list
If you see gpt-oss:20b or any gpt-oss model, skip to step 4. Otherwise continue.
ollama pull gpt-oss:20b
If Ollama is running as a systemd service (most common on Linux), configure the environment variable and restart:
sudo systemctl edit ollama
Add the following under [Service]:
[Service]
Environment="OLLAMA_ORIGINS=*"
Then restart the service:
sudo systemctl restart ollama
If you see address already in use when running manually, stop the service first:
sudo systemctl stop ollama
Then run directly in your terminal:
OLLAMA_ORIGINS=* ollama serve
Ollama listens at http://localhost:11434 by default. Browsers enforce a security policy called CORS that blocks web pages from making requests to different servers. When you use Quality Prompts from https://97115104.github.io (or any URL that isn't localhost), your browser needs Ollama to explicitly allow those requests. Setting OLLAMA_ORIGINS=* tells Ollama to accept requests from any web page, including GitHub Pages. This is safe because Ollama is only accessible on your local machine.
"address already in use": Ollama is already running (usually the desktop app). You need to stop it first, then restart with OLLAMA_ORIGINS=*. On macOS: quit from the menu bar icon or run pkill ollama. On Windows: close from the system tray or run Stop-Process -Name ollama -Force. On Linux: sudo systemctl stop ollama. Then run OLLAMA_ORIGINS=* ollama serve.
CORS error / Failed to fetch from GitHub Pages: This means OLLAMA_ORIGINS=* is not set. You must stop Ollama and restart it with the environment variable — it cannot be changed while running. See the instructions for your OS above.
Connection refused: Ollama is not running. Start it with OLLAMA_ORIGINS=* ollama serve and check the URL is http://localhost:11434.
Model not found: Run ollama list to see installed models. Pull the model you need with ollama pull gpt-oss:20b.
Enter the URL of a site you want to use as a reference for building, extending, or improving. The prompt will include analysis guidance.
Share your idea as a prefilled Quality Prompts link.
The gpt-oss-120b model (117B parameters) produces the highest quality prompts but can take 30-60 seconds to respond. If speed is more important than quality, switch to the smaller model:
The 20b model is roughly 2-3x faster with slightly lower quality. Both models are free through Puter.
You can run GPT-OSS locally on your machine for free using Ollama. No API key, no usage limits, no data leaves your computer. This works from any URL including GitHub Pages.
Browser requirement: When using Ollama from GitHub Pages (or any HTTPS URL), Google Chrome is required. Safari and other WebKit browsers block HTTPS pages from connecting to local HTTP servers.
1. Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
2. Check for existing models and pull GPT-OSS:
ollama list
ollama pull gpt-oss:20b
3. Stop the running Ollama instance, then restart with browser access:
# Quit from menu bar icon, or:
pkill ollama
OLLAMA_ORIGINS=* ollama serve
To make permanent, add export OLLAMA_ORIGINS=* to your ~/.zshrc and restart the desktop app.
1. Download and install from ollama.com/download
2. Open PowerShell, check for models and pull GPT-OSS:
ollama list
ollama pull gpt-oss:20b
3. Stop the running Ollama instance, then restart with browser access:
# Close from system tray, or:
Stop-Process -Name ollama -Force
$env:OLLAMA_ORIGINS="*"; ollama serve
To make permanent: [System.Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User") then restart the app.
1. Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
2. Check for existing models and pull GPT-OSS:
ollama list
ollama pull gpt-oss:20b
3. If running as a service, configure and restart:
sudo systemctl edit ollama
# Add: Environment="OLLAMA_ORIGINS=*"
sudo systemctl restart ollama
Or stop the service and run manually:
sudo systemctl stop ollama
OLLAMA_ORIGINS=* ollama serve
4. Click the button below — it sets the provider to Ollama with the right defaults:
Check the quality of this prompt with our other tool, Assess Prompts. It provides free expert feedback, optimization suggestions, and cost estimates.