Track OpenAI and Anthropic costs per Make.com scenario — no plugin, no code.
PromptCost is a drop-in proxy. Swap Make's OpenAI/Anthropic module for the built-in HTTP module, point it at PromptCost, and every API call gets tagged to the scenario that ran it.
Free forever · No card · Indie plan $9/mo for first 50 users
Why Make.com leaves you in the dark
Make's OpenAI and Anthropic modules don't expose per-scenario spend. You can run a dozen scenarios on one provider key and end up with a single monthly invoice that doesn't break down by scenario. When something spikes, you have no idea which automation is responsible.
PromptCost tags every request with a scenario name, so the dashboard shows you cost per scenario in real time — and lets you cap each one with a hard monthly budget.
Setup in 60 seconds
Get a PromptCost key
Sign up free at admin.promptcost.io, create a workspace, and generate an API key (it starts with sk-pc-).
Use HTTP > Make a request
In your scenario, replace the OpenAI/Anthropic module with Make's built-in HTTP > Make a request module.
# Anthropic
URL: https://api.promptcost.io/anthropic/v1/messages
Method: POST

# OpenAI
URL: https://api.promptcost.io/openai/v1/chat/completions
Method: POST
Add the headers
Content-Type: application/json
x-api-key: sk-ant-••••••••••    # your provider key (Anthropic)
cg-key: sk-pc-••••3f9a          # your PromptCost key
cg-agent: lead-scorer           # your scenario name
# For OpenAI, send your key as "Authorization: Bearer sk-••••••••••" instead of x-api-key.
The body is identical to the provider's API. Paste the same JSON you were sending before.
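To sanity-check your keys before wiring up the Make scenario, the same request can be built outside Make. A minimal Python sketch, using the endpoint and headers from the steps above; all key values are placeholders and the model name is illustrative:

```python
import json

# Headers exactly as in the setup step above; key values are placeholders.
headers = {
    "Content-Type": "application/json",
    "x-api-key": "sk-ant-xxxxxxxx",   # your Anthropic key (passed through)
    "cg-key": "sk-pc-xxxx3f9a",       # your PromptCost key
    "cg-agent": "lead-scorer",        # scenario name shown on the dashboard
}

# Body identical to Anthropic's Messages API -- nothing PromptCost-specific.
body = {
    "model": "claude-3-5-sonnet-latest",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Score this lead: ..."}],
}

payload = json.dumps(body)

# To actually send it (requires the `requests` package):
# import requests
# r = requests.post("https://api.promptcost.io/anthropic/v1/messages",
#                   headers=headers, data=payload)
```

If this call shows up on the dashboard, the Make HTTP module will behave identically with the same URL, headers, and body.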
Run the scenario
Open the PromptCost dashboard. Every call logs cost, tokens, latency, and model — grouped by the cg-agent name you passed.
What you get
- Per-scenario cost breakdown — see exactly which Make scenario is burning the budget.
- Hard budget caps — set a monthly USD limit per scenario; PromptCost returns a 429 before forwarding to the provider, so you never pay over your cap.
- Full request log — model, tokens, cost, latency, timestamp. Filterable, exportable.
- Zero key storage — your OpenAI/Anthropic keys pass through as headers; we never persist them.
- Works in Make's free tier — the HTTP module is included on every plan.
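One practical consequence of the budget caps above: your scenario's error handler should treat a 429 from PromptCost as "budget spent", not as a transient rate limit to retry. A hedged Python sketch of that decision table (the status codes other than the documented 429 cap are ordinary HTTP conventions, not PromptCost specifics):

```python
def classify_proxy_response(status_code: int) -> str:
    """Decide what an error-handler route should do with a PromptCost reply.

    Assumption: a spent monthly budget is signaled with HTTP 429, as
    described above; the other branches are generic HTTP handling.
    """
    if status_code == 429:
        # Hard cap hit: retrying won't help until the budget resets.
        return "budget_exceeded"
    if 500 <= status_code < 600:
        # Provider or proxy hiccup: safe to retry with backoff.
        return "retry"
    if status_code == 200:
        return "ok"
    return "error"
```

In Make you would express the same branching with an error handler on the HTTP module plus a Router; the function just makes the decision table explicit.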
FAQ
Do I lose Make's "Parse JSON" convenience?
No. The HTTP module returns JSON and downstream modules can reference it the same way. You can also enable "Parse response" on the HTTP module itself.
What about streaming?
Make's HTTP module doesn't stream, so leave "stream": true out of the request body and use standard responses. PromptCost itself supports both modes.
Can I track multiple scenarios with one PromptCost key?
Yes — pass a different cg-agent value per scenario. Each one shows up as its own row on the dashboard.
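To make that concrete, a small sketch: one cg-key shared across scenarios, with only cg-agent varying per scenario. The scenario names here are made up:

```python
BASE_HEADERS = {
    "Content-Type": "application/json",
    "cg-key": "sk-pc-xxxx3f9a",  # one PromptCost key for the whole workspace
}

def headers_for(scenario: str, provider_key: str) -> dict:
    """Per-scenario headers: only cg-agent changes between scenarios."""
    return {**BASE_HEADERS, "x-api-key": provider_key, "cg-agent": scenario}

# Hypothetical scenario names -- each becomes its own dashboard row.
lead_scorer = headers_for("lead-scorer", "sk-ant-xxxxxxxx")
ticket_triage = headers_for("ticket-triage", "sk-ant-xxxxxxxx")
```

In Make, this just means copying the same three headers into each scenario's HTTP module and editing only the cg-agent value.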