Expert AI feedback on your prompts. Obtain a quality score, optimization suggestions, and real cost estimates
Assess Prompts is a free, serverless tool that evaluates your prompts and returns expert feedback. Paste any prompt for any model or task and the tool analyzes it as a strategic advisor specializing in prompt engineering and AI-driven software development.
Each assessment returns a quality score (0–100) with a letter grade, a summary of strengths and issues, a list of missing elements, specific optimization suggestions, a rewritten optimized version of your prompt, and a cost estimate for running your prompt across all major frontier models and self-hosted alternatives.
Paste the prompt you want to assess into the text area. Add optional context to help the advisor understand the prompt's intended use, target model, or audience. Click Assess Prompt. The tool sends your prompt to a language model acting as a strategic prompt engineering advisor, which returns structured feedback in four formats: Assessment, Suggestions, Cost Estimate, and raw JSON.
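The raw JSON format might look like the sketch below. The field names here are illustrative assumptions inferred from the feedback the tool returns, not a documented schema.

```python
# Hypothetical shape of the raw JSON an assessment returns.
# All field names are assumptions for illustration only.
assessment = {
    "score": 78,                      # 0-100 quality score
    "grade": "B+",                    # letter grade
    "summary": "Clear task framing, but the output format is underspecified.",
    "strengths": ["States the target audience"],
    "issues": ["No output format given"],
    "missing_elements": ["Success criteria"],
    "suggestions": [
        {"title": "Specify the output format",
         "detail": "Ask for a bulleted list with a fixed length."},
    ],
    "optimized_prompt": "...",        # rewritten prompt, ready to copy
}

print(assessment["grade"])
```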
The Assessment view shows the overall score and grade, a 2-3 sentence expert summary, a list of strengths (what the prompt does well), a list of specific issues (what is wrong and why it matters), and a list of missing elements (high-impact additions).
The Suggestions view shows 3-6 specific, actionable optimization suggestions with titles and detailed explanations. Below the suggestions is an optimized version of your prompt with all improvements applied — ready to copy and use directly.
The Cost Estimate view shows the estimated token count for your prompt and the cost to run it once, 100 times, and 1,000 times across every major frontier model: Anthropic Claude, OpenAI GPT-4o, Google Gemini, xAI Grok, and Meta Llama. It also includes a note on self-hosted inference costs via Ollama or cloud GPU instances. Use this to make informed decisions about which model is right for your use case and scale.
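As a back-of-envelope check on these numbers: API providers typically price per million input tokens, so a single run costs roughly tokens / 1,000,000 * price. A minimal sketch, using placeholder prices that are not any provider's real rates:

```python
# Illustrative cost arithmetic. The prices below are hypothetical
# placeholders, not the tool's actual per-model rates.
HYPOTHETICAL_PRICES_PER_M_TOKENS = {
    "model-a": 3.00,   # USD per 1M input tokens (placeholder)
    "model-b": 0.15,   # USD per 1M input tokens (placeholder)
}

def cost_estimate(prompt_tokens: int, price_per_m: float, runs: int = 1) -> float:
    """Estimated USD cost to send a prompt of `prompt_tokens` tokens `runs` times."""
    return prompt_tokens / 1_000_000 * price_per_m * runs

tokens = 1_200  # example prompt length
for model, price in HYPOTHETICAL_PRICES_PER_M_TOKENS.items():
    print(model, round(cost_estimate(tokens, price, runs=1_000), 4))
```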
After assessment, the Use optimized prompt section lets you open the improved version directly in ChatGPT, Claude, Copilot, or Gemini. ChatGPT receives the prompt automatically. For Claude, Copilot, and Gemini, the optimized prompt is copied to your clipboard and the service opens in a new tab; just paste it when the page loads.
Puter GPT-OSS is the default and is completely free with no API key required. It uses Puter's user-pays model: you as a developer pay nothing, and each user covers their own AI inference cost through their Puter account.
OpenRouter is recommended for users who want access to hundreds of models through a single API key. It is CORS-friendly and works directly in the browser. Get a key at openrouter.ai/keys.
Anthropic connects directly to the Claude Messages API. Get a key at console.anthropic.com.
OpenAI connects to the Chat Completions API. Get a key at platform.openai.com.
Google Gemini connects to the Gemini API. Get a key at aistudio.google.com.
Ollama runs AI models locally. No API key needed, no data leaves your computer. Start Ollama with OLLAMA_ORIGINS=* to allow browser access.
Custom Endpoint supports any OpenAI-compatible API. Enter your base URL and it uses the /chat/completions format with Bearer token auth.
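The request a custom endpoint receives can be sketched as follows. The base URL, API key, and model name are placeholders; only the /chat/completions path and Bearer auth header come from the description above.

```python
# Sketch of the OpenAI-compatible request shape used by the Custom
# Endpoint provider. BASE_URL, API_KEY, and the model name are
# placeholders for whatever your own server exposes.
import json

BASE_URL = "https://api.example.com/v1"   # your endpoint (placeholder)
API_KEY = "sk-..."                        # your key (placeholder)

url = f"{BASE_URL}/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = {
    "model": "your-model-name",           # model served by your endpoint
    "messages": [
        {"role": "user", "content": "Assess this prompt: ..."},
    ],
}

# The tool POSTs this JSON body to `url` with `headers`.
print(url)
print(json.dumps(body)[:60])
```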
Paste your full prompt exactly as you intend to use it — don't clean it up before assessing. The tool is more useful when it sees the real prompt, not an idealized version. Use the optional context field to explain what the prompt is for, which model you plan to use, and what a successful output looks like. This helps the advisor give targeted, relevant feedback.
Share prompts using hash-based URLs. The Share Prompt button generates compressed links like #p=...&enter that work with prompts of any size. Legacy query string URLs (?prompt=...) are also supported for shorter prompts.
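Hash fragments avoid server-side URL length limits because the browser never sends them over the network. The tool's exact encoding is not specified here, but one common scheme (deflate compression plus URL-safe base64) can be sketched like this:

```python
# Illustrative hash-fragment encoding for sharing long prompts.
# This is one common approach (zlib deflate + URL-safe base64);
# the tool's actual compression format may differ.
import base64
import zlib

def encode_prompt(prompt: str) -> str:
    """Compress a prompt and pack it into a #p=... hash fragment."""
    compressed = zlib.compress(prompt.encode("utf-8"), level=9)
    token = base64.urlsafe_b64encode(compressed).decode("ascii").rstrip("=")
    return f"#p={token}"

def decode_fragment(fragment: str) -> str:
    """Recover the original prompt from a #p=... hash fragment."""
    token = fragment.removeprefix("#p=")
    padded = token + "=" * (-len(token) % 4)   # restore base64 padding
    return zlib.decompress(base64.urlsafe_b64decode(padded)).decode("utf-8")

link = encode_prompt("Summarize this article in three bullet points.")
print(link)
print(decode_fragment(link))
```

Because the fragment stays client-side, the prompt in a shared link is decoded entirely in the recipient's browser.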
All processing happens through the selected API provider. No data is stored on any server.
python3 -m http.server 8000
then open http://localhost:8000
Powered by Puter.com — free, no API key needed.
Runs locally on your machine — no API key, no data leaves your computer.
Ollama lets you run AI models locally on your own machine. No API key needed, no data leaves your computer, and it's completely free.
Browser requirement: When using Ollama from GitHub Pages (or any HTTPS URL), Google Chrome is required. Safari and other WebKit browsers block HTTPS pages from connecting to local HTTP servers.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull gpt-oss:20b
pkill ollama
OLLAMA_ORIGINS=* ollama serve
To make this permanent, add to your ~/.zshrc:
export OLLAMA_ORIGINS=*
Download the installer from ollama.com/download and run it.
ollama pull gpt-oss:20b
Stop-Process -Name ollama -Force
$env:OLLAMA_ORIGINS="*"; ollama serve
To make permanent:
[System.Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")
curl -fsSL https://ollama.com/install.sh | sh
ollama pull gpt-oss:20b
sudo systemctl edit ollama
Add under [Service]:
[Service]
Environment="OLLAMA_ORIGINS=*"
sudo systemctl restart ollama
http://localhost:11434
Share your prompt as a prefilled Assess Prompts link.
The AI model flagged your prompt due to content policy restrictions. This typically happens with prompts that involve cloning websites, generating copyrighted content, or other sensitive topics.
You can automatically rephrase your prompt to focus on learning and educational purposes, which usually resolves this issue.
The gpt-oss-120b model (117B parameters) produces the most thorough assessments but can take 30-60 seconds. If speed is more important, switch to the smaller model:
The 20b model is roughly 2-3x faster with slightly lower quality. Both are free through Puter.
You can run GPT-OSS locally for free using Ollama. No API key, no usage limits, no data leaves your computer.
Browser requirement: When using Ollama from GitHub Pages (or any HTTPS URL), Google Chrome is required.
1. Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
2. Pull GPT-OSS and start with browser access:
ollama pull gpt-oss:20b
pkill ollama
OLLAMA_ORIGINS=* ollama serve
1. Download from ollama.com/download and install.
2. In PowerShell:
ollama pull gpt-oss:20b
Stop-Process -Name ollama -Force
$env:OLLAMA_ORIGINS="*"; ollama serve
1. Install and configure:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull gpt-oss:20b
sudo systemctl edit ollama
# Add: Environment="OLLAMA_ORIGINS=*"
sudo systemctl restart ollama
3. Click below to switch Assess Prompts to Ollama:
| Model | Per Run | Per 100 Runs | Per 1,000 Runs |
|---|---|---|---|