Track AI costs per workflow in n8n — without writing a single line of code.
PromptCost is a drop-in proxy for OpenAI and Anthropic. Point n8n's HTTP Request node at PromptCost, add two headers, and every call gets tracked: cost, tokens, latency, model — broken down by workflow.
Free forever · No card · Indie plan $9/mo for first 50 users
Why n8n users hit a wall on AI costs
n8n's AI nodes don't surface per-workflow spend. You can run twelve workflows against the same OpenAI or Anthropic key, and at the end of the month all you get is one invoice. Which workflow blew the budget? You don't know.
PromptCost tags every request with an agent name, so the dashboard shows you exactly which workflow spent what — and lets you set hard budget caps that block runaway spend automatically.
Setup in 60 seconds
Get a PromptCost key
Sign up free at admin.promptcost.io, create a workspace, and generate an sk-pc- key.
Use the HTTP Request node
In your n8n workflow, replace the AI node (or wrap it) with a generic HTTP Request node pointing at:
```
# Anthropic
POST https://api.promptcost.io/anthropic/v1/messages

# OpenAI
POST https://api.promptcost.io/openai/v1/chat/completions
```
Add two headers

```
x-api-key: sk-ant-••••••••••   # your existing provider key (Anthropic shown)
cg-key: sk-pc-••••3f9a         # PromptCost key
cg-agent: lead-scorer          # your workflow name
```

x-api-key is the provider key you already send; cg-key and cg-agent are the two PromptCost additions. The body is identical to the provider's API. Nothing else changes.
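Assembled by hand, the proxied call looks like this. A minimal Python sketch; the endpoint and cg-* header names come from the steps above, while the key values, model name, and the helper itself are placeholders for illustration:

```python
import json

PROMPTCOST_BASE = "https://api.promptcost.io"

def build_proxied_request(provider_key: str, pc_key: str, agent: str,
                          model: str, messages: list) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for the OpenAI-compatible endpoint."""
    url = f"{PROMPTCOST_BASE}/openai/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": provider_key,   # your existing provider key
        "cg-key": pc_key,            # PromptCost workspace key
        "cg-agent": agent,           # shows up as the workflow name in the dashboard
    }
    # The body is the provider's own format, untouched.
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body

url, headers, body = build_proxied_request(
    "sk-...", "sk-pc-...", "lead-scorer",
    "gpt-4o-mini", [{"role": "user", "content": "Hello"}],
)
# Send with any HTTP client — or map url/headers/body onto n8n's HTTP Request node fields.
```

The only change versus calling the provider directly is the base URL and the two cg-* headers; everything else is passed through as-is.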
Watch per-workflow spend
Open the PromptCost dashboard. Every call logs cost, tokens in/out, latency, and model — grouped by the cg-agent name you passed.
What you get
- Per-workflow cost breakdown — see exactly which n8n workflow is burning the budget.
- Hard budget caps — set a monthly USD limit per workflow; PromptCost returns 429 before forwarding to the provider, so you never pay over your cap.
- Full request log — model, tokens, cost, latency, timestamp. Filterable, exportable.
- Zero key storage — your OpenAI/Anthropic keys pass through as headers; we never persist them.
- ~5–15ms overhead — async logging never blocks the response path.
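Because a budget-cap rejection comes back as a 429 before the provider is ever called, a workflow can branch on the status code to fail cheaply. A small sketch of that branching logic (the function name is ours; the only fact assumed from the list above is the 429 status):

```python
def handle_status(status: int) -> str:
    """Classify a PromptCost proxy response by HTTP status code."""
    if status == 429:
        return "over-budget"      # hard cap reached; the provider was never called
    if 200 <= status < 300:
        return "ok"               # normal response, passed through from the provider
    return "provider-error"       # anything else originated upstream

print(handle_status(429))  # → over-budget
```

In n8n, the same branch is an IF node on the HTTP Request node's status code: route 429 to an alert or a no-op instead of retrying, since retries will keep hitting the cap.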
FAQ
Does this work with n8n's native AI nodes?
n8n's native nodes don't expose a custom base URL. Use the HTTP Request node instead — it's a one-time swap and you keep full control of the request.
What about streaming responses?
Streaming works. The proxy passes through the SSE stream and logs the final usage when the stream completes.
Self-hosted n8n?
Yes. PromptCost is a hosted proxy — your n8n instance just needs outbound HTTPS to api.promptcost.io.