Choose the Best AI Model for Every Mailbox

Switch between OpenAI, Claude, and Gemini to find the perfect fit for each use case.

The Multi-LLM Support Challenge

No single AI model is best for everything. Multi-LLM support lets you optimize for quality, speed, or cost per mailbox.

3

LLM providers supported

6+

AI models available

Per-agent

Model configuration

How It Works

1

Connect Your Inbox

Link your Gmail or Outlook account in minutes.

2

Add Knowledge Base

Upload your FAQ, docs, and policies.

3

AI Drafts Responses

Relay reads incoming emails and drafts replies from your knowledge base.

4

Review & Send

Your team reviews AI drafts and sends with one click.

Example Email Thread

Incoming Email

We have different needs for different departments — technical support needs the best quality, while our FAQ mailbox needs speed. Can we use different AI models?

AI-Generated Draft

High confidence · Multi-LLM Configuration Guide

Absolutely! This is exactly what our multi-LLM support is designed for. Here's how to set it up:

**Per-Agent Model Selection:** Each mailbox/agent can use a different AI model:

- **Technical Support Mailbox** → Claude Opus or GPT-5
  - Best for complex, nuanced technical responses
  - Highest accuracy and reasoning capability
- **FAQ Mailbox** → Gemini Flash or GPT-5 Nano
  - Fastest response generation
  - Most cost-effective for routine questions
- **Customer Success Mailbox** → Claude Sonnet or GPT-5 Mini
  - Great balance of quality, tone, and speed
  - Excellent at empathetic, personalized responses

**How to Configure:**

1. Go to Settings > Agents
2. Select the agent/mailbox you want to configure
3. Under 'AI Model', choose the provider and model
4. Save — changes take effect immediately

**Flexibility:**

- Switch models anytime without data loss
- A/B test models to compare performance
- Use your own API keys for any provider

Want help choosing the right model for each department?
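The per-agent setup described in the draft can be pictured as a simple mapping from mailbox to provider and model. This is only an illustrative sketch — the agent names, model identifiers, and the `pick_model` helper are hypothetical, not Relay's actual configuration API:

```python
# Hypothetical per-agent model configuration: each mailbox/agent maps to
# its own provider/model pair, so departments get different LLMs.
AGENT_MODELS = {
    "technical-support": {"provider": "anthropic", "model": "claude-opus"},
    "faq": {"provider": "google", "model": "gemini-flash"},
    "customer-success": {"provider": "anthropic", "model": "claude-sonnet"},
}

def pick_model(agent: str) -> str:
    """Return the 'provider/model' string configured for a given agent."""
    cfg = AGENT_MODELS[agent]
    return f"{cfg['provider']}/{cfg['model']}"
```

Because the mapping is per-agent rather than global, switching the FAQ mailbox to a faster model is a one-entry change that leaves the other mailboxes untouched.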

Benefits

Model Flexibility

Choose from OpenAI, Claude, and Gemini per mailbox.

Optimize Per Use Case

Technical support gets accuracy, FAQ gets speed.

No Lock-In

Switch models anytime without migration or data loss.

Cost Optimization

Use premium models where needed and cost-effective ones where not.

Popular Industries

SaaS · Enterprise · Technology · Mid-Market

Frequently Asked Questions

What LLM providers does Relay support?

OpenAI (GPT-5 family), Anthropic (Claude family), and Google (Gemini family). New providers are added regularly.

Can I A/B test different models?

Yes — run two agents on the same mailbox with different models to compare response quality and speed.
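One simple way to picture such an A/B test is a deterministic 50/50 split of incoming emails between two model variants, with results tallied per variant. Everything below (the variant names, the hash-based assignment) is an illustrative sketch, not Relay's implementation:

```python
import hashlib

# Hypothetical variants: two agents on the same mailbox, different models.
VARIANTS = {"A": "claude-sonnet", "B": "gpt-5-mini"}

def assign_variant(email_id: str) -> str:
    """Deterministically assign an email to variant A or B (50/50 split).

    Hashing the email ID keeps assignment stable: the same email always
    lands in the same variant, which makes comparisons reproducible.
    """
    digest = hashlib.sha256(email_id.encode("utf-8")).digest()
    return "A" if digest[0] % 2 == 0 else "B"
```

A stable split like this lets you compare response quality and speed per variant without the same thread bouncing between models.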

Do I need separate API keys for each provider?

You can use Relay's included AI credits for all providers, or bring your own API keys for specific providers.
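The fallback logic described here — bring-your-own keys where you have them, included credits everywhere else — can be sketched as a per-provider lookup. The dictionary keys, placeholder key value, and `resolve_api_key` helper are all hypothetical illustrations, not Relay's actual settings schema:

```python
# Hypothetical per-provider key settings: an entry means "use my own key";
# a missing entry falls back to the platform's included AI credits.
OWN_KEYS = {"openai": "sk-my-own-key"}  # illustrative placeholder value

def resolve_api_key(provider: str) -> str:
    """Prefer a bring-your-own key for this provider, else included credits."""
    return OWN_KEYS.get(provider, "included-credits")
```

The point of the fallback is that mixing is allowed: you can supply a key for one provider while the others keep running on included credits.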

Automate Multi-LLM Support with AI

Start your free trial and see how Relay handles multi-LLM support with AI-drafted responses.