
Choose Your AI: Relay Supports OpenAI, Claude, and Gemini

Relay lets you choose between OpenAI, Anthropic Claude, and Google Gemini for AI-powered email support. Here is why multi-LLM support matters and how to pick the right model for your team.


Relay Team

February 15, 2026 · 8 min read

When we designed Relay, we made a deliberate architectural decision: the AI model layer should be interchangeable. Your knowledge base, your workflow, your agent configuration, and your team's experience should not be tied to a specific AI provider. Today, Relay supports three leading AI providers, and switching between them takes a single configuration change.

This post explains why we built Relay with multi-LLM support, what each provider brings to the table, and how to think about choosing the right model for your support operation.

Why Multi-LLM Support Matters

The AI Landscape Is Moving Fast

Twelve months ago, the gap between AI providers was significant. Today, the top models from OpenAI, Anthropic, and Google are all capable of generating high-quality support responses when grounded in a good knowledge base. But each provider continues to improve at a different pace and in different dimensions. The model that is best for your use case today may not be the best choice six months from now.

Multi-LLM support means you can adapt without rebuilding your workflow. When a new model version launches with improved capabilities, you can try it out alongside your current model and switch if it performs better.

Vendor Independence

Building your entire support operation on a single AI provider's API creates a dependency that carries risk:

  • Pricing changes: AI providers are still figuring out their pricing models. Lock-in means you absorb whatever pricing changes your provider makes.
  • API stability: Outages happen. If your AI provider has downtime and you have no alternative, your AI support stops working.
  • Capability shifts: Providers sometimes deprecate models, change behaviors between versions, or adjust content policies in ways that affect your use case.
  • Terms of service changes: Data handling, privacy, and usage terms can change at a provider's discretion.

With multi-LLM support, you always have options. Relay abstracts the model layer so that a provider change is a configuration switch, not an engineering project.
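To make the abstraction concrete, here is a minimal sketch of what a swappable model layer can look like. All names here are illustrative assumptions, not Relay's actual API: the draft generator depends only on a small interface, so changing providers is a lookup change rather than an engineering project.

```python
from typing import Protocol


class DraftModel(Protocol):
    """Anything that can turn a question plus knowledge-base context into a draft."""

    def generate(self, question: str, context: str) -> str: ...


class OpenAIModel:
    def generate(self, question: str, context: str) -> str:
        return f"[openai draft for: {question}]"  # stand-in for a real API call


class ClaudeModel:
    def generate(self, question: str, context: str) -> str:
        return f"[claude draft for: {question}]"  # stand-in for a real API call


# The provider name is configuration; everything else is shared code.
PROVIDERS: dict[str, DraftModel] = {
    "openai": OpenAIModel(),
    "anthropic": ClaudeModel(),
}


def draft_reply(provider: str, question: str, kb_context: str) -> str:
    # Only this lookup depends on the configured provider.
    return PROVIDERS[provider].generate(question, kb_context)
```

Because the rest of the pipeline only sees the `DraftModel` interface, adding a third provider means adding one entry to the table.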

Different Models, Different Strengths

While all three providers offer highly capable models, they are not identical. Each has characteristics that make it better suited for certain types of interactions.

The Three Providers

OpenAI (GPT Models)

OpenAI's GPT models are the most widely deployed AI models in production applications. In the support context, they offer:

Strengths:

  • Strong general-purpose response quality across a wide range of topics
  • Reliable instruction following for tone and format requirements
  • Excellent at structured responses (step-by-step instructions, numbered lists, tables)
  • Large ecosystem and extensive documentation
  • Consistent performance across diverse question types

Considerations:

  • Pricing varies by model tier; newer models may cost more
  • The most popular choice, so your responses may sound similar to those of other companies using the same model

Best for: Teams that want reliable, well-rounded performance across all types of support questions. OpenAI is a strong default choice for most support operations.

Anthropic (Claude Models)

Anthropic's Claude models have gained significant adoption in customer-facing applications, particularly those where nuanced communication matters.

Strengths:

  • Particularly strong at nuanced, empathetic communication
  • Excellent at following complex instructions and maintaining consistency
  • Strong performance on long-form responses and detailed explanations
  • Thoughtful handling of ambiguous or multi-part questions
  • Tends to be more cautious about generating information not in the knowledge base

Considerations:

  • Response generation can be slightly slower than some alternatives for very long responses
  • The cautious approach may sometimes produce overly qualified answers for straightforward questions

Best for: Teams where tone and empathy are critical, such as healthcare, financial services, or high-touch B2B support. Also a strong choice for technical documentation that requires detailed, precise explanations.

Google (Gemini Models)

Google's Gemini models have made significant progress in the support and enterprise space, with strong capabilities in information synthesis and multilingual support.

Strengths:

  • Strong multilingual capabilities for teams supporting international customers
  • Good balance of speed and quality
  • Effective at synthesizing information from multiple knowledge base sources
  • Competitive pricing
  • Strong integration with the broader Google ecosystem

Considerations:

  • The newest entrant of the three in the support tool space
  • May require more specific prompt configuration for optimal results

Best for: Teams with multilingual support needs, teams already invested in the Google ecosystem, or teams looking for a cost-effective alternative to OpenAI or Claude.


How to Choose the Right Model

Start with a Trial

The best way to choose is to test. Relay makes it easy to switch models, so you can run each provider for a few days and compare the results.

Here is a practical evaluation process:

  1. Collect 50 representative customer emails covering your most common question types
  2. Process them through each model using the same knowledge base and agent configuration
  3. Have your agents review the drafts from each model without knowing which model generated them
  4. Track which drafts require the least editing and which feel most natural for your brand

This blind comparison gives you a data-driven answer about which model works best for your specific content, customer base, and brand voice.
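The evaluation process above can be sketched in a few lines. This is an illustrative helper, not a Relay feature: drafts are shuffled so reviewers cannot tell which provider produced them, and edit effort is then averaged per provider.

```python
import random
from collections import defaultdict


def blind_review(drafts: dict[str, list[str]], seed: int = 0) -> list[tuple[str, str]]:
    """Flatten (provider, draft) pairs and shuffle them so the provider
    label can be hidden from reviewers during scoring."""
    pairs = [(provider, d) for provider, ds in drafts.items() for d in ds]
    random.Random(seed).shuffle(pairs)
    return pairs


def average_edit_effort(scores):
    """scores: iterable of (provider, characters_edited) recorded after review.
    Returns mean edit effort per provider; lower means less rework."""
    totals, counts = defaultdict(int), defaultdict(int)
    for provider, edited in scores:
        totals[provider] += edited
        counts[provider] += 1
    return {p: totals[p] / counts[p] for p in totals}
```

Running roughly 50 emails through each model and comparing the averages gives you the data-driven answer described above.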

Factors to Consider

Response quality: Which model produces the most accurate, complete, and well-structured responses for your specific types of questions?

Tone alignment: Which model's natural communication style is closest to your brand voice? Some models are naturally more formal, while others are more conversational.

Edge case handling: How does each model handle questions that are partially covered in your knowledge base, or questions that are ambiguous? The model's behavior in these scenarios matters more than its performance on straightforward questions.

Speed: For some teams, the speed difference between models matters. If you plan to use auto-send mode extensively, faster generation means faster customer responses.

Cost: While Relay's pricing is based on your plan tier rather than per-token model costs, some plans include different levels of usage. Understanding the cost characteristics of each model helps you optimize.

How Model Switching Works in Relay

Changing your AI model in Relay is a configuration change at the agent level. Here is what happens when you switch:

  1. Navigate to your mailbox's agent configuration
  2. Select a different AI provider from the model dropdown
  3. Save the configuration

That is it. From that point forward, new incoming emails are processed by the new model. Your knowledge base, tone instructions, response guidelines, and all other configuration stay exactly the same. The change takes effect immediately for new conversations.
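Conceptually, the switch is one field in the agent's configuration. The sketch below uses assumed field names, not Relay's actual schema, to show that everything except the provider is untouched:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class AgentConfig:
    provider: str          # e.g. "openai", "anthropic", or "google"
    tone: str              # tone and style instructions
    knowledge_base_id: str # knowledge base stays attached to the agent
    auto_send: bool        # workflow mode is independent of the model


def switch_provider(config: AgentConfig, new_provider: str) -> AgentConfig:
    # Only the model field changes; tone, knowledge base, and workflow
    # carry over unchanged to the new configuration.
    return replace(config, provider=new_provider)
```

New incoming emails would then be drafted by whichever provider the saved configuration names.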

What Does Not Change

  • Your knowledge base content and organization
  • Your agent's tone and style configuration
  • Your workflow (approval mode vs. auto-send)
  • Your team structure and assignments
  • Your conversation history and analytics
  • Your email provider connections

What Does Change

  • The AI model generating new drafts
  • Potentially the style and structure of generated responses (each model has its own writing characteristics)
  • Response generation speed (varies by model)

Advanced: Using Different Models for Different Mailboxes

Because each mailbox in Relay has its own AI agent, you can use different models for different mailboxes. This lets you match the model to the specific needs of each support channel:

  • Technical support mailbox: Use a model that excels at detailed, precise explanations
  • Customer success mailbox: Use a model that is strongest at empathetic, relationship-oriented communication
  • Sales inquiry mailbox: Use a model that is good at concise, action-oriented responses

This is not a common configuration, but it is available for teams that want to optimize at this level of granularity.
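Since each mailbox carries its own agent, per-mailbox model choice amounts to a simple mapping. The addresses and provider names below are hypothetical examples matching the channels above:

```python
# Hypothetical per-mailbox provider routing; addresses and choices are
# illustrative, following the channel examples in this section.
MAILBOX_PROVIDERS = {
    "support@example.com": "anthropic",  # detailed, precise technical explanations
    "success@example.com": "anthropic",  # empathetic, relationship-oriented tone
    "sales@example.com": "openai",       # concise, action-oriented responses
}


def provider_for(mailbox: str, default: str = "openai") -> str:
    # Mailboxes without an explicit choice fall back to a team-wide default.
    return MAILBOX_PROVIDERS.get(mailbox, default)
```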

Our Approach Going Forward

We are committed to supporting the leading AI providers and adding new options as the market evolves. Our goal is to ensure that you always have access to the best available AI models without being locked into any single provider.

As new model versions are released, we evaluate them for support use cases and add support when they meet our quality standards. We also monitor the performance and reliability of each provider and will communicate proactively if we identify issues that could affect your support quality.

Getting Started with Multi-LLM

If you are already using Relay, you can try a different AI provider right now. Go to your agent configuration, select a new model, and see how it performs with your knowledge base and customer questions. There is no migration, no data loss, and you can switch back at any time.

If you are new to Relay, you will choose your initial AI provider during setup. We recommend starting with whichever provider you are most curious about, knowing that you can change at any time. The setup process is the same regardless of which model you choose.

Multi-LLM support is about giving you control and flexibility. The AI model market will continue to evolve, and your support tool should evolve with it. With Relay, it does.
