paypertoken gives you OpenAI-compatible LLM access paid with USDC micropayments. No signup, no API keys.
Base URL: `https://paypertoken.dev`

```bash
curl https://paypertoken.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
This requires an active tempo session. See Payment below.
paypertoken uses the mppx / tempo payment protocol instead of API keys. Any mppx-compatible client handles the session handshake automatically: point it at paypertoken.dev and send requests.
Payments use USDC on Base. A testnet mode is available via the `TEMPO_TESTNET` flag.
### `/health`

Health check endpoint.

```json
{ "status": "ok" }
```
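A client can gate traffic on this endpoint before opening a paid session. A minimal sketch of the response check (stdlib only; the URL is the one documented above):

```python
import json


def parse_health(payload: bytes) -> bool:
    """True iff a /health response body reports status 'ok'."""
    try:
        return json.loads(payload).get("status") == "ok"
    except (ValueError, AttributeError):
        return False


# Live check (network required):
# import urllib.request
# with urllib.request.urlopen("https://paypertoken.dev/health", timeout=5) as resp:
#     print(parse_health(resp.read()))
```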
### `/v1/models`

Returns all available models with pricing. No payment required.

```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o-mini",
      "object": "model",
      "owned_by": "openai",
      "pricing": { "input_per_1m": 0.15, "output_per_1m": 0.6, "currency": "USD" }
    }
  ]
}
```
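Since the listing is free, a client can poll it to pick a model by price. A sketch over the response shape shown above (`cheapest_models` is an illustrative helper, not part of the API):

```python
def cheapest_models(models_json: dict, limit: int = 3) -> list[tuple[str, float]]:
    """Sort models by input price per 1M tokens, using the /v1/models shape above."""
    priced = [(m["id"], m["pricing"]["input_per_1m"]) for m in models_json["data"]]
    return sorted(priced, key=lambda pair: pair[1])[:limit]


# Live listing (no payment required):
# import json, urllib.request
# with urllib.request.urlopen("https://paypertoken.dev/v1/models") as resp:
#     print(cheapest_models(json.loads(resp.read())))
```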
### `/v1/chat/completions`

Create a chat completion. Requires an active tempo session (returns 402 if missing).

| Field | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (see Models & Pricing) |
| `messages` | array | Yes | Array of `{role, content}` objects |
| `temperature` | number | No | Sampling temperature (0–2) |
| `max_tokens` | number | No | Max output tokens (default 4096) |
| `top_p` | number | No | Nucleus sampling |
| `frequency_penalty` | number | No | Frequency penalty (−2 to 2) |
| `presence_penalty` | number | No | Presence penalty (−2 to 2) |
| `stop` | string \| string[] | No | Stop sequences |
| `tools` | array | No | Tool/function definitions |
| `tool_choice` | string \| object | No | Tool selection strategy |
| `n` | number | No | Number of completions |
`stream` is always forced to `true` with `include_usage: true`. The response is an SSE stream of OpenAI-compatible chunks.
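Because every response arrives as a stream, clients accumulate the text across chunk deltas; with `include_usage`, the final chunk typically carries a `usage` object and an empty `choices` array. A sketch of that accumulation over decoded chunk payloads (`join_deltas` is an illustrative helper):

```python
def join_deltas(chunks: list[dict]) -> str:
    """Reassemble the full completion text from OpenAI-style stream chunks."""
    parts = []
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            text = (choice.get("delta") or {}).get("content")
            if text:
                parts.append(text)
    return "".join(parts)


# With the openai SDK the same loop looks like (tempo session assumed open):
# for chunk in client.chat.completions.create(model="gpt-4o-mini",
#                                             messages=msgs, stream=True):
#     if chunk.choices and chunk.choices[0].delta.content:
#         print(chunk.choices[0].delta.content, end="")
```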
### `/v1/chat/completions` (no body)

Management operations. Send a POST with no body for session open/close/voucher handling. The tempo protocol headers control the operation.
All prices include a 10% service fee over upstream provider costs. The cheapest available provider is auto-selected per model.
| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| `gpt-4o-mini` | $0.165 | $0.660 |
| `google/gemini-2.5-flash` | $0.165 | $0.660 |
| `meta-llama/llama-4-maverick` | $0.220 | $0.660 |
| `anthropic/claude-haiku-4` | $0.880 | $4.40 |
| `o3-mini` | $1.21 | $4.84 |
| `gpt-4o` | $2.75 | $11.00 |
| `google/gemini-2.5-pro` | $1.38 | $11.00 |
| `anthropic/claude-sonnet-4` | $3.30 | $16.50 |
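The fee and per-request arithmetic above can be sketched as follows. `PRICES` is an illustrative subset copied from the table, not an official constant; the fee calculation mirrors the stated 10% markup (e.g. the $0.15 upstream input price for gpt-4o-mini shown in the `/v1/models` example becomes $0.165 listed):

```python
def price_with_fee(upstream_per_1m: float) -> float:
    """Listed price = upstream provider price + 10% service fee."""
    return round(upstream_per_1m * 1.10, 3)


PRICES = {  # USD per 1M tokens (input, output), from the table above
    "gpt-4o-mini": (0.165, 0.660),
    "gpt-4o": (2.75, 11.00),
    "anthropic/claude-sonnet-4": (3.30, 16.50),
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    inp, out = PRICES[model]
    return input_tokens / 1_000_000 * inp + output_tokens / 1_000_000 * out
```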
```bash
# Open a tempo session first, then:
curl https://paypertoken.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://paypertoken.dev/v1",
    api_key="unused",  # auth via tempo session
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
```javascript
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://paypertoken.dev/v1',
  apiKey: 'unused', // auth via tempo session
})

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
})
console.log(completion.choices[0].message.content)
```
All errors return JSON with an `error` object containing `message` and `type` fields.
| Status | Type | Description |
|---|---|---|
| 400 | `invalid_request_error` | Invalid JSON or schema validation failure |
| 402 | `payment_error` / `budget_exceeded` | No tempo session or budget exceeded |
| 404 | `invalid_request_error` | Model not found |
| 413 | `invalid_request_error` | Request body exceeds 1MB |
| 429 | `upstream_error` | Upstream provider rate limited |
| 500 | `server_error` | Internal server error |
| 502 | `upstream_error` | Upstream provider unavailable |
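A caller can branch on these statuses; one possible reaction policy (illustrative, not prescribed by the API — the action names are made up for the sketch):

```python
RETRYABLE = {429, 502}  # upstream_error: rate limited / provider unavailable


def next_action(status: int) -> str:
    """Map an error status from the table above to a client reaction."""
    if status == 402:
        return "open-or-fund-session"  # payment_error / budget_exceeded
    if status in RETRYABLE:
        return "retry-with-backoff"
    if status == 500:
        return "retry-once"
    return "fix-request"  # 400 / 404 / 413 are caller errors
```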