PromptLab Public API (Free)

AI-powered prompt improvement and grading endpoints — no API key required, no signup.

Base URL: http://143.198.136.81:8802
Rate limit: 20 requests / hour per IP
Auth: None required
Format: JSON

Endpoints

POST /tools/improve

Improve any weak or vague prompt using AI. Returns the rewritten prompt alongside token usage and cost. Powered by Claude Haiku.

Request body:

  prompt (string): The prompt to improve. Maximum 500 characters.

Response fields:

  original (string): The prompt you submitted.
  improved (string): The AI-rewritten, more specific prompt.
  tokens_used (integer): Total input and output tokens consumed.
  cost_usd (float): Actual cost in USD for this call.
  share_url (string): Shareable permalink to this result.
Example request:

curl -s -X POST http://143.198.136.81:8802/tools/improve \
  -H "Content-Type: application/json" \
  -d '{"prompt": "write a blog post about AI"}'

Example response:
{
  "original": "write a blog post about AI",
  "improved": "Write a 600-word blog post for a technical audience explaining how large language models generate text. Include one concrete analogy, three practical use cases, and a short conclusion with a call to action.",
  "tokens_used": 87,
  "cost_usd": 0.00032,
  "share_url": "http://143.198.136.81:8802/share/abc123"
}
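The same call can be made from Python. A minimal sketch using only the standard library; the length check mirrors the documented 500-character maximum (the constant name is ours, not part of the API):

```python
import json
import urllib.request

API_BASE = "http://143.198.136.81:8802"
MAX_IMPROVE_CHARS = 500  # documented limit for /tools/improve

def improve_prompt(prompt: str) -> dict:
    """Rewrite a weak prompt via the free /tools/improve endpoint."""
    if len(prompt) > MAX_IMPROVE_CHARS:
        raise ValueError(f"prompt exceeds {MAX_IMPROVE_CHARS} characters")
    req = urllib.request.Request(
        f"{API_BASE}/tools/improve",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.load(resp)
```

Validating length client-side avoids burning one of your 20 hourly requests on an input the server would reject anyway.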
POST /tools/grade

Grades a prompt A–F across five quality dimensions: context, specificity, constraints, examples, and goal clarity. Returns a total score out of 10, a letter grade, the top improvement action, and a fully rewritten improved prompt.

Request body:

  prompt (string): The prompt to grade. Maximum 1000 characters.

Response fields:

  original (string): The prompt you submitted.
  total_score (integer): Score from 0–10.
  max_score (integer): Always 10.
  grade (string): Letter grade: A, B, C, D, or F.
  scores (object): Per-dimension scores (0–2 each): context, specificity, constraints, examples, goal_clarity.
  top_improvement (string): The single most impactful change you could make.
  improved_prompt (string): A fully rewritten version of your prompt with all gaps filled.
  cost_usd (float): Actual cost in USD for this call.
  share_url (string): Shareable permalink to this result.
Example request:

curl -s -X POST http://143.198.136.81:8802/tools/grade \
  -H "Content-Type: application/json" \
  -d '{"prompt": "summarize this document"}'

Example response:
{
  "original": "summarize this document",
  "total_score": 3,
  "max_score": 10,
  "grade": "D",
  "scores": {
    "context": 1, "specificity": 0,
    "constraints": 0, "examples": 1, "goal_clarity": 1
  },
  "top_improvement": "Specify the desired output length and format (e.g. bullet list, max 200 words)",
  "improved_prompt": "Summarize the following document in 3–5 bullet points, each under 30 words. Focus on key decisions, outcomes, and action items. Omit background context.",
  "cost_usd": 0.00041,
  "share_url": "http://143.198.136.81:8802/share/def456"
}
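Because `scores` is a plain object, picking the weakest dimension to fix first takes one line. A small helper sketch (the sample data matches the response above):

```python
def weakest_dimension(scores: dict) -> str:
    """Return the lowest-scoring dimension; ties break alphabetically."""
    return min(sorted(scores), key=lambda name: scores[name])

# Scores from the example response above; constraints and specificity tie at 0
scores = {"context": 1, "specificity": 0, "constraints": 0,
          "examples": 1, "goal_clarity": 1}
print(weakest_dimension(scores))  # -> constraints
```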
POST /promptlab/compare

Runs your prompt through both Claude Haiku and Claude Sonnet in parallel and returns both responses alongside a side-by-side cost comparison. Useful for deciding which model tier is sufficient for your use case. Requires a PromptLab access key (use demo to try it).

Request body:

  prompt (string): The prompt to run through both models.
  api_key (string): PromptLab access key. Use "demo" for free access.

Response fields:

  results.haiku (object): Haiku response, token counts, cost_usd, latency_ms.
  results.sonnet (object): Sonnet response, token counts, cost_usd, latency_ms.
  cost_comparison (object): haiku_cost, sonnet_cost, savings_with_haiku_usd, savings_with_haiku_pct.
Example request:

curl -s -X POST http://143.198.136.81:8802/promptlab/compare \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain async/await in Python in one paragraph.", "api_key": "demo"}'

Example response:
{
  "prompt": "Explain async/await in Python in one paragraph.",
  "results": {
    "haiku":  { "response": "...", "cost_usd": 0.00012, "latency_ms": 480 },
    "sonnet": { "response": "...", "cost_usd": 0.00063, "latency_ms": 820 }
  },
  "cost_comparison": {
    "haiku_cost": 0.00012,
    "sonnet_cost": 0.00063,
    "savings_with_haiku_usd": 0.00051,
    "savings_with_haiku_pct": 81.0
  }
}
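From Python, the comparison can be driven the same way. A standard-library sketch; the `savings_pct` helper recomputes the percentage the API reports in `savings_with_haiku_pct`, under the assumption that it is simply the relative cost difference rounded to one decimal:

```python
import json
import urllib.request

API_BASE = "http://143.198.136.81:8802"

def compare_models(prompt: str, api_key: str = "demo") -> dict:
    """Run a prompt through both model tiers via /promptlab/compare."""
    req = urllib.request.Request(
        f"{API_BASE}/promptlab/compare",
        data=json.dumps({"prompt": prompt, "api_key": api_key}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def savings_pct(haiku_cost: float, sonnet_cost: float) -> float:
    """Percentage saved by choosing Haiku over Sonnet, rounded to 1 dp."""
    return round((sonnet_cost - haiku_cost) / sonnet_cost * 100, 1)
```

Applied to the example costs above, savings_pct(0.00012, 0.00063) reproduces the 81.0% figure, which is a quick sanity check before trusting Haiku for a given workload.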

Python Integration Example

Drop this into any script to grade prompts programmatically before sending them to a production LLM. Each check costs one extra API round trip and under $0.001.

import requests

def grade_prompt(prompt: str) -> dict:
    """
    Grade a prompt A-F using the PromptLab free API.
    Returns grade, total_score/10, and a rewritten improved_prompt.
    """
    url = "http://143.198.136.81:8802/tools/grade"
    resp = requests.post(url, json={"prompt": prompt}, timeout=15)
    resp.raise_for_status()
    return resp.json()


# Example: fail fast if prompt quality is below a threshold
result = grade_prompt("summarize this document")

if result["grade"] in ("D", "F"):
    print(f"Weak prompt (grade {result['grade']}, score {result['total_score']}/10)")
    print(f"Top fix: {result['top_improvement']}")
    print(f"Suggested rewrite:\n{result['improved_prompt']}")
else:
    print(f"Prompt grade: {result['grade']} ({result['total_score']}/10) — sending to model.")
    # proceed with your LLM call here

Rate Limits & Fair Use

All endpoints share the limit of 20 requests per hour per IP noted above.
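If you hit the hourly ceiling, back off and retry rather than hammering the API. A sketch assuming rate-limited requests surface as a catchable error (the documentation does not state the exact status code, so `RateLimitError` here is our own placeholder the caller raises, likely on HTTP 429):

```python
import time

class RateLimitError(Exception):
    """Placeholder: raise this when a request is rejected for rate limiting."""

def with_retry(send, retries: int = 3, base_delay_s: float = 2.0):
    """Call send(); on RateLimitError, wait and retry with a doubling delay.

    Assumes the caller maps the API's rate-limit response (probably
    HTTP 429, but unconfirmed here) to RateLimitError.
    """
    for attempt in range(retries):
        try:
            return send()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller decide
            time.sleep(base_delay_s * 2 ** attempt)
```

Exponential backoff keeps a burst of retries well under the 20-per-hour budget instead of exhausting it in seconds.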

Need 130+ production-ready prompts?

PromptLab packs cover product management, software development, and startup strategy — battle-tested and ready to drop into your workflow.

Browse prompt packs at PromptLab