This guide shows how to route AI requests through Lava’s gateway programmatically. Every request is logged with token counts, costs, and provider details automatically.
Start by choosing an auth path: Authenticate an Agent explains the difference between the MCP flow (login inside the MCP) and the SDK flow (Lava.login() in your own code).
Managed vs unmanaged: You can use managed keys (Lava pays the provider; you pay Lava) or unmanaged (bring your own key — you supply the provider API key; Lava still meters usage and may charge a service fee). Both use the same forward token; set the optional provider_key in the token for unmanaged.
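In sketch form, the mode is decided purely by whether provider_key is present in the token. (`billingMode` below is an illustrative helper for this document, not part of the SDK.)

```typescript
type ForwardTokenOpts = { provider_key?: string };

// The same forward token shape serves both modes; the presence of
// provider_key selects unmanaged (bring-your-own-key) billing.
function billingMode(opts: ForwardTokenOpts): 'managed' | 'unmanaged' {
  return opts.provider_key ? 'unmanaged' : 'managed';
}
```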

MCP flow

If your agent is using the Lava MCP server, do not reimplement this in code. The MCP handles auth and routing automatically:
  • login to authenticate (auto-provisions a spend key for gateway access)
  • prompt to send chat completions to any AI model — all models use OpenAI format regardless of provider
  • call to execute any API call through the gateway — auth is generated automatically
  • search to discover available providers and get ready-to-use request examples
  • get_provider_docs to fetch upstream API documentation when you need more detail
No manual forward token generation is needed. prompt uses an auto-provisioned spend key, and call generates forward tokens internally.

Provider fallbacks with prompt

The prompt tool accepts an optional fallbacks array. If the primary model returns a 5xx or 429 error, or the request fails at the network level, Lava retries each fallback in order until one succeeds:
{
  "tool": "prompt",
  "input": {
    "model": "claude-haiku-4-5",
    "messages": [{ "role": "user", "content": "Hello!" }],
    "fallbacks": [
      { "url": "api.openai.com/v1/chat/completions", "model": "gpt-4o-mini" },
      { "url": "api.anthropic.com/v1/messages", "model": "claude-3-haiku-20240307" }
    ]
  }
}
Each fallback entry is a { url, model } object — url is a scheme-less provider URL and model is the model name to use at that provider. Fallbacks can cross providers (e.g. Claude primary → GPT fallback). Lava translates the request body into each provider’s format automatically.
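The retry semantics described above can be sketched as follows. This is an illustration of the policy, not Lava's actual implementation:

```typescript
type AttemptResult = { status: number; body: string };

// Try the primary, then each fallback in order, stopping at the first
// non-retryable response. A 5xx, a 429, or a thrown network error
// moves on to the next attempt.
async function withFallbacks(
  attempts: Array<() => Promise<AttemptResult>>
): Promise<AttemptResult> {
  let lastError: unknown = new Error('no attempts given');
  for (const attempt of attempts) {
    try {
      const res = await attempt();
      if (res.status < 500 && res.status !== 429) return res;
      lastError = new Error(`retryable status ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: try the next fallback
    }
  }
  throw lastError;
}
```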

SDK flow

If you are writing application code yourself, use the examples below with @lavapayments/nodejs.

How It Works

Lava’s gateway sits between your code and AI providers. Instead of calling the provider’s URL directly, you send requests to Lava’s forward URL. Lava proxies the request, tracks usage, and returns the provider’s response unchanged. The request body stays identical to what the provider expects; you only change the base URL and auth header.

Make a Request

The simplest way to use the gateway is to pass your secret key directly in the Authorization header. Costs are charged to your merchant wallet; no token generation is needed. Find your secret key in Dashboard > Gateway > Secrets. Use the SDK’s pre-configured provider URLs, which point to Lava’s gateway with the correct upstream URL already set.
import { Lava } from '@lavapayments/nodejs';

const lava = new Lava(); // reads LAVA_SECRET_KEY from env

const response = await fetch(lava.providers.openai + '/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.LAVA_SECRET_KEY}`
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello from my agent!' }]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
To bill customers instead of your own wallet, use forward tokens with generateForwardToken(). See Bill Your Customers for the full billing flow.
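The request shape is identical either way; only the bearer credential changes. A minimal helper (illustrative only, not part of the SDK) makes the swap explicit:

```typescript
// Build a gateway request. `credential` is either your secret key
// (merchant-wallet billing) or a forward token from
// generateForwardToken() (customer billing).
function gatewayRequest(providerUrl: string, credential: string, body: unknown) {
  return {
    url: providerUrl + '/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${credential}`,
      },
      body: JSON.stringify(body),
    },
  };
}
```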

Available Providers

The SDK includes pre-configured URLs for 25+ providers:
lava.providers.openai          // OpenAI
lava.providers.anthropic       // Anthropic
lava.providers.google          // Google (native)
lava.providers.googleOpenaiCompatible // Google (OpenAI-compatible)
lava.providers.mistral         // Mistral
lava.providers.deepseek        // DeepSeek
lava.providers.xai             // xAI (Grok)
lava.providers.groq            // Groq
lava.providers.together        // Together AI
lava.providers.fireworks       // Fireworks
lava.providers.cerebras        // Cerebras
// ... and more
See the full list of supported providers for all available options.

Discover Available Models

List all models available through the gateway:
const models = await lava.models.list();

for (const model of models.data) {
  console.log(`${model.id} (${model.owned_by})`);
}

Check Your Usage

After making requests, query your usage data:
// Get usage for the last 7 days
const usage = await lava.usage.retrieve({
  start: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().split('T')[0]
});

console.log('Total requests:', usage.totals.total_requests);
console.log('Total cost:', usage.totals.total_cost);

// List individual requests
const { data: requests } = await lava.requests.list({ limit: 5 });

for (const req of requests) {
  console.log(`${req.model} - ${req.model_usage.total_tokens} tokens - $${req.cost}`);
}

What’s Next?

Bill Your Customers

Add meters, plans, and checkout to charge for usage

Manage AI Spend

Create scoped API keys with spend and rate limits

Supported Providers

See all 25+ AI providers available through the gateway

Forward Proxy Details

Streaming, error handling, and advanced configuration