
Running across all providers

Compare Claude, DeepSeek, GPT-4, and more side-by-side to find the best output for your task.

Why compare providers?

Each provider has different strengths. Claude excels at design; DeepSeek is blazingly fast; GPT-4 is balanced. For important work, testing all three (and comparing) lets you pick the best output and make informed cost/quality trade-offs.

Smart workflow: Test all three on important tasks. You'll usually find one that's obviously better for your specific use case. For quick experiments, just use DeepSeek (fast + cheap).

Step 1: Set up multiple providers

Add API keys for multiple providers

See Setting up API keys for detailed instructions. You'll need:

  • Claude (Anthropic API key from console.anthropic.com)
  • DeepSeek (API key from platform.deepseek.com)
  • OpenAI / GPT-4 (API key from platform.openai.com)

Or use LingModel (free, no key needed).
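Before selecting providers, it can help to check which keys are actually set. The sketch below assumes the keys live in environment variables; the variable names shown are assumptions (check what your setup expects), and LingModel is omitted because it needs no key.

```python
import os

# Hypothetical environment-variable names -- confirm the exact names
# your playground or SDK expects before relying on these.
PROVIDER_KEYS = {
    "Claude": "ANTHROPIC_API_KEY",
    "DeepSeek": "DEEPSEEK_API_KEY",
    "GPT-4": "OPENAI_API_KEY",
}

def available_providers(env=None):
    """Return the providers whose API key is present in the environment."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

# LingModel is always available since it requires no key.
print(available_providers({"DEEPSEEK_API_KEY": "sk-..."}))
```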

Step 2: Select multiple providers

Click multiple provider buttons

In the playground, you'll see provider buttons (Claude, DeepSeek, GPT-4, LingModel). Click to select multiple:

  • Click Claude to select it (it lights up)
  • While Claude is selected, click DeepSeek to add it
  • While both are selected, click GPT-4 to add it

You can select as many as you want. All will run in parallel when you click "Send".

Tip: For cost reasons, usually test just 2-3 providers. Claude + DeepSeek is a great pair (quality + speed) for quick decisions. Claude + DeepSeek + GPT-4 gives you the most comprehensive comparison.

Step 3: Write your prompt

Type a clear prompt

Use the same prompt you'd normally use. Make it as specific as possible. Example:

Build a landing page for a SaaS tool. Hero section, 3 feature cards, pricing table, testimonials, footer. Modern, minimal design. Single HTML file.

A clear prompt ensures all providers are solving the same problem, making comparison meaningful.

Step 4: Send to all providers

Click "Send" and watch them race

All selected providers run in parallel. You'll see:

  • Each provider's name appearing in the preview as it starts generating
  • Streaming code/HTML appearing in real-time
  • All providers finishing at different times (DeepSeek usually fastest, Claude more thorough)
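The fan-out behavior above can be sketched as a parallel dispatch: the same prompt goes to every selected provider at once, and results come back in finishing order. This is an illustrative sketch, not the playground's actual implementation; `call_provider` is a stub whose sleep times only mimic the typical finishing order.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for a real provider call. The delays are illustrative only,
# mimicking the typical order: DeepSeek first, Claude last.
def call_provider(name, prompt):
    delays = {"DeepSeek": 0.01, "GPT-4": 0.02, "Claude": 0.03}
    time.sleep(delays[name])  # simulate network + generation latency
    return name, f"<html><!-- {name} output for: {prompt} --></html>"

def run_all(prompt, providers):
    """Fan the same prompt out to every selected provider in parallel."""
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        futures = [pool.submit(call_provider, p, prompt) for p in providers]
        # as_completed yields results in finishing order, like the playground UI
        return [f.result() for f in as_completed(futures)]

results = run_all("Build a landing page", ["Claude", "DeepSeek", "GPT-4"])
print([name for name, _ in results])
```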

Step 5: Compare and evaluate

Switch between outputs

Once all providers finish, click between them to view each result. For each, evaluate:

  • Visual quality: Does the design look polished?
  • Layout: Does it match what you asked for?
  • Completeness: Are all sections present?
  • Code quality: (Switch to Code tab) Is the HTML clean?

Comparison criteria

  • Visual polish: smooth gradients, consistent spacing, professional look (winner often Claude)
  • Creativity: novel layouts, unexpected solutions (winner often Claude)
  • Speed: time to generate, in seconds (winner often DeepSeek)
  • Code quality: semantic HTML, clean CSS, no bloat (winner often Claude or GPT-4)
  • Spec adherence: follows your prompt exactly (winner often GPT-4 or Claude)
  • Cost: total API cost (winner DeepSeek, roughly 10x cheaper)

Decision framework

Pick Claude if: Visual quality and creativity matter most. You're willing to pay for the best.

Pick DeepSeek if: Speed and cost matter most. You're doing bulk work or lots of experimentation.

Pick GPT-4 if: You want a balanced option. Good quality, good speed, reasonable cost.
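The decision framework above can be condensed into a tiny lookup. This is purely illustrative (the mapping just restates this guide's recommendations); tune it to your own comparison results.

```python
# Map what matters most on a given task to a suggested provider.
# The mapping below restates this guide's recommendations, nothing more.
def pick_provider(priority):
    table = {
        "quality": "Claude",    # visual polish and creativity
        "speed": "DeepSeek",    # fastest
        "cost": "DeepSeek",     # cheapest
        "balanced": "GPT-4",    # good quality/speed/cost trade-off
    }
    return table.get(priority, "run all three and compare")

print(pick_provider("speed"))
```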

Workflow patterns

Pattern 1: Triple test for important work

  1. Run prompt on Claude + DeepSeek + GPT-4
  2. Compare all three visually
  3. Pick the best one
  4. Download winner

Pattern 2: Quick experiment (save money)

  1. Run only on DeepSeek (fast, cheap)
  2. If happy, use it
  3. If not, re-prompt or try Claude

Pattern 3: Cost-quality balance

  1. Run on DeepSeek + Claude
  2. If DeepSeek is good enough, use it (save money)
  3. If Claude is better and worth the cost, use that

Sample cost analysis

Running one build on all three providers means paying each provider's API fees for that single generation.

For an important design, that's well worth it to ensure quality.

Cost warning: Running on all providers means paying all three. For bulk work, run one at a time to save money.
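To see how the triple-run cost compares to a DeepSeek-only run, here is a rough estimator. The per-million-token prices are placeholders (check each provider's pricing page); the only claim carried over from this guide is that DeepSeek is roughly 10x cheaper.

```python
# Hypothetical USD prices per million output tokens -- placeholders only,
# chosen so that DeepSeek is ~10x cheaper as this guide states.
PRICE_PER_MTOK = {"Claude": 15.00, "GPT-4": 10.00, "DeepSeek": 1.50}

def run_cost(providers, output_tokens):
    """Total cost of one build fanned out to the given providers."""
    return sum(PRICE_PER_MTOK[p] * output_tokens / 1_000_000 for p in providers)

triple = run_cost(["Claude", "DeepSeek", "GPT-4"], output_tokens=4_000)
solo = run_cost(["DeepSeek"], output_tokens=4_000)
print(f"all three: ${triple:.3f}, DeepSeek only: ${solo:.3f}")
```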

Common scenarios

Pitch deck for investors: Run all three, pick Claude if best (highest quality matters)

Quick website mockup: Run DeepSeek only (fast, cheap, good enough)

Client deliverable: Run Claude + DeepSeek, compare, use Claude if noticeably better

Learning experiment: Use LingModel free tier first, then test DeepSeek if needed