Routify

Integrations · Aider

Aider with Chinese model prices.

Aider is the original AI pair-programmer in your terminal. Point it at Routify with one env var and you get DeepSeek V3.2, Kimi K2.5, Qwen3 Coder — all on the same --model flag pattern Aider already knows.

Setup in 30 seconds

Aider uses an OpenAI-compatible base URL. Override OPENAI_API_BASE and you’re done — no plugin, no extension, no proxy.

💡 Aider's --weak-model handles repo-map summarization and commit messages — it costs next to nothing if you point it at deepseek-chat. Your strong --model only burns dollars on the actual edits.

Terminal
# Install if you haven't
pip install aider-chat

# Point at Routify
export OPENAI_API_BASE=https://routify.bytedance.city/v1
export OPENAI_API_KEY=rtf_xxx_your_routify_key

# Then use any Routify model id:
aider --model openai/deepseek-chat        # cheapest
aider --model openai/qwen3-coder          # best for repo refactors
aider --model openai/kimi-k2              # long-context
aider --model openai/claude-opus-4-7      # frontier, pass-through
aider --model openai/gpt-5.5              # OpenAI flagship

# Or use weak/strong split:
aider --model openai/claude-opus-4-7 --weak-model openai/deepseek-chat

Aider uses LiteLLM under the hood for routing — Routify exposes models via the openai/* prefix because that's the OpenAI-compatible path.
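If you'd rather not export the variables in every shell, Aider also loads a .env file from the working directory at startup. A minimal sketch, with a placeholder key you'd replace with your own:

```shell
# .env in your repo root — Aider picks this up automatically.
OPENAI_API_BASE=https://routify.bytedance.city/v1
OPENAI_API_KEY=rtf_xxx_your_routify_key
```

Keep the .env out of version control; the key is a live credential.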

Profiles

Picked for Aider.

Three battle-tested model combos. Pick the profile that matches your priority and copy the model id straight in.

Cheap

DeepSeek for both the strong and weak model slots. Sessions tend to finish noticeably faster than with frontier models, and your invoice barely moves.

Cost note

$0.05 to $0.30 per coding session, depending on diff size.

Primary model

Fallback chain

The smart router automatically fails over to the next model id if the primary is over budget or unavailable.
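If you switch between profiles often, a small shell helper can map a profile name to the right model id. The function name and mapping below are a convenience sketch, not part of Routify or Aider:

```shell
# Hypothetical wrapper: map a profile name to a Routify model id.
routify_model() {
  case "$1" in
    cheap)    echo "openai/deepseek-chat" ;;   # lowest cost
    refactor) echo "openai/qwen3-coder" ;;     # repo refactors
    longctx)  echo "openai/kimi-k2" ;;         # long-context work
    *)        echo "openai/deepseek-chat" ;;   # safe default
  esac
}

routify_model refactor   # prints openai/qwen3-coder
```

Usage: aider --model "$(routify_model refactor)" --weak-model "$(routify_model cheap)"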

FAQ

Why the openai/ prefix?

Aider uses LiteLLM, which prefixes provider routes. Since Routify is OpenAI-compatible, all model ids go under openai/. Aider sees them as OpenAI models; Routify routes to the actual upstream.

Does --watch-files work?

Yes. The watch loop is local; only the model call goes through Routify.

What about Aider's repo-map?

Repo-map runs on whatever you set as --weak-model. Pin a cheap one (deepseek-chat) so you don't pay Opus rates to summarize function names.

Token counting?

Aider uses LiteLLM's token counter, which sometimes mis-estimates for non-OpenAI models. Cross-check usage in the Routify dashboard for exact counts.
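For a quick local sanity check, the common ~4-characters-per-token heuristic gets you in the right ballpark. Real tokenizers vary by model, so treat this as a rough estimate only:

```shell
# Rough token estimate: ~4 characters per token, rounded up.
estimate_tokens() {
  local chars
  chars=$(printf '%s' "$1" | wc -c)
  echo $(( (chars + 3) / 4 ))
}

estimate_tokens "hello world, this is a test"   # 27 chars -> prints 7
```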

Voice / image input?

Aider's voice mode transcribes with Whisper, so audio goes to whichever endpoint you configured for transcription (outside Routify's scope). Image input requires a vision model — gpt-4o, doubao-1.6-pro, and claude-opus-4-7 all work.

Ship with Aider in 30 seconds.

$5 free credit. No credit card required.