How ModelTriage works

Every prompt is different. ModelTriage automatically picks the right LLM for each task — so you get the best answer without guessing which model to use.

You describe the task

Type your prompt and optionally attach files — code screenshots, logs, JSON, or any text file. ModelTriage analyzes the content to understand what you need.

We classify and route

A deterministic classifier identifies the task type, complexity, and stakes. A scoring engine evaluates each model's fit based on a capability matrix, then selects the best one.
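The classify-then-score flow can be sketched as follows. This is a minimal illustration under assumed names and weights (`CAPABILITY_MATRIX`, `classify`, `route` are hypothetical), not ModelTriage's actual implementation:

```python
# Hypothetical sketch of the classify-and-route step. The models, scores,
# rules, and weights below are illustrative assumptions only.

# Per-model capability scores (0-10) on dimensions the router considers.
CAPABILITY_MATRIX = {
    "GPT-5 Mini":      {"coding": 6, "writing": 6, "analysis": 5, "speed": 9},
    "Claude Opus 4.6": {"coding": 9, "writing": 9, "analysis": 10, "speed": 4},
    "Gemini 3 Flash":  {"coding": 6, "writing": 7, "analysis": 6, "speed": 9},
}

def classify(prompt: str) -> dict:
    """Deterministic, rule-based classification: same prompt, same result."""
    task = "coding" if "error" in prompt.lower() or "def " in prompt else "writing"
    complexity = "high" if len(prompt) > 500 else "low"
    return {"task": task, "complexity": complexity}

def route(prompt: str) -> str:
    """Score each model's fit for the classified task and pick the best."""
    signal = classify(prompt)
    # Simple tasks weight speed more heavily; complex ones weight capability.
    speed_weight = 0.6 if signal["complexity"] == "low" else 0.2
    def fit(caps: dict) -> float:
        return (1 - speed_weight) * caps[signal["task"]] + speed_weight * caps["speed"]
    return max(CAPABILITY_MATRIX, key=lambda m: fit(CAPABILITY_MATRIX[m]))
```

Because the classifier is rule-based rather than model-based, routing decisions are repeatable and explainable, which is what makes the "why this model" explanation possible.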

You get the best answer

The response streams in real time with a clear explanation of why that model was chosen. Or use comparison mode to run 2-3 models in parallel and see a structured diff.
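Comparison mode's parallel fan-out can be sketched like this. `call_model` is a stand-in for a real provider API call, and the function names are hypothetical, not ModelTriage's actual client code:

```python
# Illustrative sketch of comparison mode: query several models
# concurrently and collect their answers side by side.
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real provider API call.
    return f"[{model}] answer to: {prompt}"

def compare(models: list[str], prompt: str) -> dict[str, str]:
    """Run the same prompt against each model in parallel."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}
```

The collected answers can then be diffed field by field to produce the structured comparison view.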

Supported models

ModelTriage routes across leading providers. Each model is scored on coding, writing, analysis, speed, and vision capabilities.

GPT-5 Mini (OpenAI)

Quick answers, lightweight tasks, low cost

GPT-5.2 (OpenAI)

Deep reasoning, complex multi-step problems

Claude Haiku 4.5 (Anthropic)

Fastest Anthropic model, ideal for simple tasks

Claude Sonnet 4.5 (Anthropic)

Strong all-rounder, good balance of speed and depth

Claude Opus 4.6 (Anthropic)

Highest capability, nuanced analysis, long context

Gemini 3 Flash (Google)

Low latency, strong at summarization and extraction

Gemini 3 Pro (Google)

Multimodal strength, large context window

Privacy by design

ModelTriage is built with privacy as a core principle, not an afterthought.

Prompts are never stored

Your prompts and model responses are streamed directly to your browser. We store only a SHA-256 hash of each prompt for routing analytics; the hash cannot be reversed to recover your text.
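The hashing step amounts to keeping a fixed-length digest in place of the prompt. A minimal sketch (the function name `prompt_fingerprint` is illustrative, not ModelTriage's actual code):

```python
# Only the 64-character hex digest is retained; SHA-256 is one-way,
# so the original prompt cannot be reconstructed from it.
import hashlib

def prompt_fingerprint(prompt: str) -> str:
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()
```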

No third-party tracking

No analytics scripts, no ad trackers, no data brokers. Usage data stays in our database and is only used to enforce limits and improve routing.

Delete anytime

Delete your account and all associated data from your account settings. Removal is immediate and irreversible — we don't keep backups of your data.