Live · Sonnet 4.6 · ~30s avg latency

Slow Postgres query?
Senior-DBA-grade review in seconds.

Paste a SQL query, an EXPLAIN ANALYZE plan, or a pgBadger slow-query log. We return a markdown review with index DDL, query rewrites, write-path impact, and a verification command. No data leaves your side — schema-only, query-text-only.

Analyze a query — free → How it works

// First 3 reviews per month free · No card · No ChatGPT screenshot in your audit log

# a real review (Postgres 15, 12M rows, e-commerce app)
you@you $ qd analyze ./slow_query.sql ./schema.sql
→ accepted. review #1042. model claude-sonnet-4-6. eta ~30s

qd-1042 delivered in 26.4s
# bottleneck: full scan on orders (12.4M rows) — predicate (status, created_at)
# the existing single-column index on created_at is not selective with status filter

-- existing index (insufficient)
CREATE INDEX idx_orders_created_at ON orders (created_at);
-- recommended (partial index for the hot status set)
CREATE INDEX CONCURRENTLY idx_orders_status_created_user
  ON orders (status, created_at DESC, user_id)
  WHERE status IN ('paid','shipped','delivered');

⚠ write-path: ~3.8ms per INSERT (measured against your schema shape).
   p95 latency: 1,820ms → 4ms. confidence: 0.91.
   ship: CREATE INDEX CONCURRENTLY (no AccessExclusive lock). rollback: DROP INDEX CONCURRENTLY.

→ also: rewrite suggested. a MATERIALIZED CTE computes products WHERE category_id=47
        once instead of a per-row subquery probe. expected -38% execution time.
→ verify: EXPLAIN (ANALYZE, BUFFERS) <your-query>
237 · Reviews this week
~28s · P50 review latency
$49/mo · Unlimited tier
0 rows · Customer data we ever see

01 · How it works

Three steps. No human in the loop, no scheduling, no Slack ping back-and-forth. Built so a senior backend engineer can paste, ship, and move on before the morning coffee goes cold.

// 01

Paste

Drop in your slow query, an EXPLAIN ANALYZE plan, or a pgBadger log excerpt. No schema dump required for query-only reviews. No data ever leaves you.
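If you're starting from a raw query, the plan input can be captured like this (illustrative only; the `orders` table and filter are stand-ins for your own query):

```sql
-- Run against staging or a replica: ANALYZE executes the query for real.
-- BUFFERS adds shared-block hit/read counts, which help distinguish
-- cold-cache I/O from a genuinely bad plan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, user_id, total
FROM orders
WHERE status = 'paid'
  AND created_at > now() - interval '7 days';
```

Paste the full output, including the trailing `Planning Time` and `Execution Time` lines.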

// 02

Review

An LLM agent classifies your input (SQL · EXPLAIN · log), routes it to the matching Postgres-internals review prompt, and returns markdown. Median 28s.

// 03

Ship

You get a Git-ready diff: index DDL, query rewrite, write-path cost, p95 estimate, rollback plan, and the EXPLAIN node that drove every recommendation.

02 · What's actually under the hood

A short, honest description. We're an LLM wrapper with senior-DBA-grade prompts. The moat is the workflow, not the model.

// MODEL

Claude Sonnet 4.6

Best price/performance for technical analysis. Switched from Sonnet 4.5 in March 2026 after benchmarks on pgvector and EXPLAIN-tree problems showed +12% accuracy.

// PROMPTS

Three review modes

SQL-only review · EXPLAIN-plan review · pgBadger log review. Each has a custom prompt with PG-version-aware caveats and partial / expression / covering-index recipes.

// GUARDS

Migration safety net

Every index recommendation defaults to CREATE INDEX CONCURRENTLY. Every rewrite ships with a rollback. Plans needing an AccessExclusive lock surface the warning above the recommendation, not buried in a footnote.
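In practice the default looks like this (index name and definition borrowed from the demo review above; a sketch, not your migration):

```sql
-- Build without blocking writes: CONCURRENTLY takes no AccessExclusive lock.
-- Caveats: it cannot run inside a transaction block, and a failed build
-- leaves an INVALID index behind.
CREATE INDEX CONCURRENTLY idx_orders_status_created_user
  ON orders (status, created_at DESC, user_id)
  WHERE status IN ('paid', 'shipped', 'delivered');

-- After a failed build, check for leftover invalid indexes:
SELECT indexrelid::regclass FROM pg_index WHERE NOT indisvalid;

-- Rollback, also without blocking writes:
DROP INDEX CONCURRENTLY idx_orders_status_created_user;
```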

// HISTORY

Audit trail by default

Every review gets a stable /r/QD-#### URL, saved 90 days, exportable as JSON. Open a PR with the URL. Revisit when the table grows 10×. ChatGPT's free tier doesn't keep this.

// PRIVACY

Schema and query, never data

We accept SQL text, schema DDL, and pg_stat_statements rows. We do not accept (or ask for) row data. Vendor security review: usually a 30-minute conversation, not a 6-week ordeal.

// LIMITS

What we don't do

We don't replace pganalyze (continuous monitoring). We don't replace your DBA on a live incident. We do replace the 90-minute Stack Overflow rabbit hole when you're staring at a slow query at 2pm.

03 · Pricing

Card-free Free tier. Cheap enough for a solo dev to expense under the personal-card threshold. No "database size" tax, no seats, no enterprise haggling.

Free

$0 · forever

3 reviews per month. No card, no email tracking, no upsell modal at minute 30.

  • 3 reviews / month
  • Web UI only
  • 30-day saved history
  • Same model as paid tiers

Team · proposed

$149 / mo

Five seats, shared review library, Slack integration. We're validating demand: join the list and you'll lock in the charter rate.

  • 5 seats, shared library
  • Slack /qd integration
  • SSO (Google + Microsoft)
  • 10 req/sec API
  • Prioritized reviewer queue

// Stripe-billed. Cancel inside the dashboard, no email gauntlet. Final tiers are pending our price-tier simulation (MiroFish) and may still shift, but the charter rate is locked for early subscribers.

04 · FAQ

Is this just an LLM wrapper?

Honest answer: yes — a Claude Sonnet 4.6 wrapper, with three differences that matter. (1) The prompt is purpose-built for Postgres EXPLAIN-plan analysis, not "be helpful." (2) Anthropic's API has a no-train clause for our org; your queries do not feed any model's training data. (3) Every review is saved, URL-stable, and exportable — your audit log doesn't depend on your screenshot habit. The model is a commodity. The workflow is the product.

Do I have to share my data?

No. We accept the SQL text, optionally a schema-only dump (pg_dump --schema-only), and pg_stat_statements rows. No customer rows, no PII. Query text is stored only to power your saved review history, is never used for training, and the only third party that sees it is the Anthropic Claude API (covered by the no-train clause for our org).
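For the pg_stat_statements side, the rows we accept can be pulled with a query like this (requires the extension in shared_preload_libraries; `mean_exec_time`/`total_exec_time` are the PG 13+ column names, older versions use `mean_time`/`total_time`):

```sql
-- Top 10 statements by average execution time.
-- pg_stat_statements normalizes queries, so constants are already
-- replaced with $1, $2, ...: no literal values leave your database.
SELECT query,
       calls,
       round(mean_exec_time::numeric, 1)  AS mean_ms,
       round(total_exec_time::numeric, 0) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```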

Why not just hire a DBA consultant?

You should — for a complex, ongoing engagement. For "this one query is slow and I want a senior opinion before I ship the index migration tomorrow," a $300–$600/hour consultant is the wrong tool. We're an order of magnitude cheaper for the bounded problem and an order of magnitude faster.

What Postgres versions do you handle?

11 through 17. Reviews land for Aurora PG, RDS, Supabase, Neon, Heroku Postgres, and self-hosted. If you're on something exotic (Citus, TimescaleDB, AlloyDB), tell us in the input — the model adjusts its recommendations accordingly.

What if the recommendation is wrong?

Two safeguards. First: every recommendation includes a confidence score and a "test on staging first" line. Second: if the fix doesn't work, re-submit with the EXPLAIN ANALYZE post-deploy and we iterate on the same review URL. Free tier counts the iteration as part of the same review (no extra credit consumed).

Where does the name come from?

Querk = "query" + "quirk" — the small unexpected detail in your EXPLAIN plan that's actually slowing you down. The 5-letter spelling makes the .io domain easy to remember and types fast on mobile.

(We were briefly called QueryDoctor; querydoctor.io still 301-redirects here for anyone with a stale bookmark.)

05 · Waitlist

Concierge-mode beta: first 30 teams to submit a real slow query get the fix free. Tell us what you’re running and we’ll reach out within 24 hours.