AI-powered prompt improvement and grading endpoints. No API key required, no signup.

Base URL: http://143.198.136.81:8802
Rate limit: 20 requests / hour per IP
Auth: None required
Format: JSON
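With a hard limit of 20 requests per hour per IP, a client should expect to be throttled. Below is a minimal retry sketch; it assumes the API returns HTTP 429 when the limit is hit (the status code is not documented above, so treat that as an assumption), and the `backoff_delay` helper and its constants are illustrative choices, not part of the API.

```python
import time
import requests

BASE_URL = "http://143.198.136.81:8802"  # base URL from the docs

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Exponential backoff delay in seconds, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def post_with_retry(path: str, payload: dict, max_attempts: int = 3) -> dict:
    """POST to the API, retrying when throttled (assumed HTTP 429)."""
    for attempt in range(max_attempts):
        resp = requests.post(BASE_URL + path, json=payload, timeout=15)
        if resp.status_code == 429:  # rate-limited: back off and retry
            time.sleep(backoff_delay(attempt))
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("rate limit not cleared after retries")
```

At 20 requests/hour, a short exponential backoff only smooths over bursts; sustained use still needs to stay under the hourly budget.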
`POST /tools/improve`

Improves a weak or vague prompt using AI. Returns the rewritten prompt alongside token usage and cost. Powered by Claude Haiku.
Request body:

| Field | Type | Description |
|---|---|---|
| prompt | string | The prompt to improve. Max 500 characters. |
Response fields:

| Field | Type | Description |
|---|---|---|
| original | string | The prompt you submitted. |
| improved | string | The AI-rewritten, more specific prompt. |
| tokens_used | integer | Total input + output tokens consumed. |
| cost_usd | float | Actual cost in USD for this call. |
| share_url | string | Shareable permalink to this result. |
```bash
curl -s -X POST http://143.198.136.81:8802/tools/improve \
  -H "Content-Type: application/json" \
  -d '{"prompt": "write a blog post about AI"}'
```
```json
{
  "original": "write a blog post about AI",
  "improved": "Write a 600-word blog post for a technical audience explaining how large language models generate text. Include one concrete analogy, three practical use cases, and a short conclusion with a call to action.",
  "tokens_used": 87,
  "cost_usd": 0.00032,
  "share_url": "http://143.198.136.81:8802/share/abc123"
}
```
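The improve call above can be wrapped in a few lines of Python. This is a sketch, not an official client; the `within_limit` helper simply enforces the documented 500-character limit client-side before spending a request against the rate budget.

```python
import requests

IMPROVE_URL = "http://143.198.136.81:8802/tools/improve"
MAX_LEN = 500  # documented limit for /tools/improve

def within_limit(prompt: str, limit: int = MAX_LEN) -> bool:
    """Client-side check against the documented character limit."""
    return len(prompt) <= limit

def improve_prompt(prompt: str) -> dict:
    """Call /tools/improve; response includes original, improved,
    tokens_used, cost_usd, and share_url."""
    if not within_limit(prompt):
        raise ValueError(f"prompt exceeds {MAX_LEN} characters")
    resp = requests.post(IMPROVE_URL, json={"prompt": prompt}, timeout=15)
    resp.raise_for_status()
    return resp.json()
```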
`POST /tools/grade`

Grades a prompt A–F across five quality dimensions: context, specificity, constraints, examples, and goal clarity. Returns a total score out of 10, a letter grade, the top improvement action, and a fully rewritten improved prompt.
Request body:

| Field | Type | Description |
|---|---|---|
| prompt | string | The prompt to grade. Max 1000 characters. |
Response fields:

| Field | Type | Description |
|---|---|---|
| original | string | The prompt you submitted. |
| total_score | integer | Score from 0–10. |
| max_score | integer | Always 10. |
| grade | string | Letter grade: A, B, C, D, or F. |
| scores | object | Per-dimension scores (0–2 each): context, specificity, constraints, examples, goal_clarity. |
| top_improvement | string | The single most impactful change you could make. |
| improved_prompt | string | A fully rewritten version of your prompt with all gaps filled. |
| cost_usd | float | Actual cost in USD for this call. |
| share_url | string | Shareable permalink to this result. |
```bash
curl -s -X POST http://143.198.136.81:8802/tools/grade \
  -H "Content-Type: application/json" \
  -d '{"prompt": "summarize this document"}'
```
```json
{
  "original": "summarize this document",
  "total_score": 3,
  "max_score": 10,
  "grade": "D",
  "scores": {
    "context": 1,
    "specificity": 0,
    "constraints": 0,
    "examples": 1,
    "goal_clarity": 1
  },
  "top_improvement": "Specify the desired output length and format (e.g. bullet list, max 200 words)",
  "improved_prompt": "Summarize the following document in 3–5 bullet points, each under 30 words. Focus on key decisions, outcomes, and action items. Omit background context.",
  "cost_usd": 0.00041,
  "share_url": "http://143.198.136.81:8802/share/def456"
}
```
`POST /promptlab/compare`

Runs your prompt through both Claude Haiku and Claude Sonnet in parallel and returns both responses alongside a side-by-side cost comparison. Useful for deciding which model tier is sufficient for your use case. Requires a PromptLab access key (use `demo` to try it).
Request body:

| Field | Type | Description |
|---|---|---|
| prompt | string | The prompt to run through both models. |
| api_key | string | PromptLab access key. Use "demo" for free access. |
Response fields:

| Field | Type | Description |
|---|---|---|
| results.haiku | object | Haiku response, token counts, cost_usd, latency_ms. |
| results.sonnet | object | Sonnet response, token counts, cost_usd, latency_ms. |
| cost_comparison | object | haiku_cost, sonnet_cost, savings_with_haiku_usd, savings_with_haiku_pct. |
```bash
curl -s -X POST http://143.198.136.81:8802/promptlab/compare \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain async/await in Python in one paragraph.", "api_key": "demo"}'
```
```json
{
  "prompt": "Explain async/await in Python in one paragraph.",
  "results": {
    "haiku": { "response": "...", "cost_usd": 0.00012, "latency_ms": 480 },
    "sonnet": { "response": "...", "cost_usd": 0.00063, "latency_ms": 820 }
  },
  "cost_comparison": {
    "haiku_cost": 0.00012,
    "sonnet_cost": 0.00063,
    "savings_with_haiku_usd": 0.00051,
    "savings_with_haiku_pct": 81.0
  }
}
```
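One way to use the `cost_comparison` object is to pick a model tier automatically. The sketch below is an illustrative pattern, not part of the API: `cheaper_is_enough` and its 50% threshold are hypothetical names and defaults; only the `savings_with_haiku_pct` field and the request shape come from the documentation above.

```python
import requests

COMPARE_URL = "http://143.198.136.81:8802/promptlab/compare"

def cheaper_is_enough(comparison: dict, min_savings_pct: float = 50.0) -> str:
    """Pick a tier from the cost_comparison object: prefer Haiku when
    savings_with_haiku_pct clears the (arbitrary) threshold."""
    if comparison["savings_with_haiku_pct"] >= min_savings_pct:
        return "haiku"
    return "sonnet"

def compare(prompt: str, api_key: str = "demo") -> dict:
    """Run the prompt through both models via /promptlab/compare."""
    resp = requests.post(
        COMPARE_URL,
        json={"prompt": prompt, "api_key": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Quality is not in the comparison object, so in practice you would also eyeball the two `response` strings before committing to the cheaper tier.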
Drop this into any script to grade prompts programmatically before sending them to a production LLM. Each call is a single HTTP round trip (typically a few hundred milliseconds, depending on network latency) and costs under $0.001.
```python
import requests

def grade_prompt(prompt: str) -> dict:
    """
    Grade a prompt A-F using the PromptLab free API.
    Returns grade, total_score/10, and a rewritten improved_prompt.
    """
    url = "http://143.198.136.81:8802/tools/grade"
    resp = requests.post(url, json={"prompt": prompt}, timeout=15)
    resp.raise_for_status()
    return resp.json()

# Example: fail fast if prompt quality is below a threshold
result = grade_prompt("summarize this document")
if result["grade"] in ("D", "F"):
    print(f"Weak prompt (grade {result['grade']}, score {result['total_score']}/10)")
    print(f"Top fix: {result['top_improvement']}")
    print(f"Suggested rewrite:\n{result['improved_prompt']}")
else:
    print(f"Prompt grade: {result['grade']} ({result['total_score']}/10) — sending to model.")
    # proceed with your LLM call here
```
- No API key is needed for /tools/improve and /tools/grade. /promptlab/compare requires a PromptLab access key; use `demo` to get started.
- Prompt length limits: 500 characters for /tools/improve, 1000 characters for /tools/grade.
- PromptLab packs cover product management, software development, and startup strategy, battle-tested and ready to drop into your workflow.
Browse prompt packs at PromptLab