Fix: 'Unknown Model' Error with Qwen or Custom Providers in OpenClaw
Getting 'Unknown model' errors when using Qwen, DeepSeek, or other custom providers in OpenClaw? Your config looks right but doesn't work. Here's why.
TL;DR: OpenClaw validates model names against a built-in list. Custom provider models need explicit type declarations to bypass this.
The Error
Error: Unknown model "qwen2.5-72b-instruct"
You might also see:
Error: Unknown model "deepseek-r1" — did you mean one of: claude-3-5-sonnet, gpt-5.2...
Error: Model "yi-lightning" is not recognized. Check your provider configuration.
Error: Cannot determine provider for model "glm-4-plus"
The frustrating part: your config looks completely correct. The API key works. You can curl the provider's API directly and get responses. But OpenClaw refuses to use the model.
Why This Happens
OpenClaw maintains an internal registry of known models and their providers. When you specify a model, OpenClaw tries to:
- Look up the model name in its registry
- Determine which provider handles it
- Route the request to the right API
If the model isn't in the registry, OpenClaw rejects it — even if you've configured a custom provider that would handle it perfectly. This is a common pain point with newer or lesser-known providers like Qwen (Alibaba), DeepSeek, Yi, Zhipu, Moonshot, and others.
The issue is especially confusing because:
- Your custom provider config is correct
- The base URL is right
- The API key works
- But OpenClaw never even tries to call the API — it fails at the model validation step
How to Fix It
Step 1: Declare the custom provider with explicit model mapping
Instead of just specifying a model name, tell OpenClaw exactly which provider handles which models:
{
  "providers": {
    "qwen": {
      "type": "openai-compatible",
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "apiKey": "sk-your-qwen-key",
      "models": {
        "qwen2.5-72b-instruct": {
          "maxTokens": 8192,
          "contextWindow": 131072
        },
        "qwen-turbo-latest": {
          "maxTokens": 8192,
          "contextWindow": 131072
        }
      }
    }
  },
  "ai": {
    "provider": "qwen",
    "model": "qwen2.5-72b-instruct"
  }
}
The "models" block explicitly registers these model names so OpenClaw's validator knows they're legitimate.
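One way to convince yourself the names are wired up correctly is to parse the config and collect every explicitly declared model. This is a standalone sanity check, independent of OpenClaw:

```python
import json

# Standalone check: gather every model name declared under
# "providers.*.models" in a config like the one above.
config = json.loads("""
{
  "providers": {
    "qwen": {
      "type": "openai-compatible",
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "apiKey": "sk-your-qwen-key",
      "models": {
        "qwen2.5-72b-instruct": {"maxTokens": 8192, "contextWindow": 131072},
        "qwen-turbo-latest": {"maxTokens": 8192, "contextWindow": 131072}
      }
    }
  },
  "ai": {"provider": "qwen", "model": "qwen2.5-72b-instruct"}
}
""")

declared = {
    name
    for provider in config["providers"].values()
    for name in provider.get("models", {})
}

# The model in "ai.model" must appear among the declared names,
# or the validator will reject it.
assert config["ai"]["model"] in declared
```

If the assertion fails on your own config, the model name in `"ai.model"` doesn't match any declared entry, which is exactly the mismatch that triggers the error.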
Step 2: Use the right provider type
Different providers need different type values:
| Provider | Type | Base URL |
|---|---|---|
| Qwen (Alibaba) | openai-compatible | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| DeepSeek | openai-compatible | https://api.deepseek.com/v1 |
| Together AI | openai-compatible | https://api.together.xyz/v1 |
| Groq | openai-compatible | https://api.groq.com/openai/v1 |
| Ollama (local) | ollama | http://localhost:11434 |
| OpenRouter | openai-compatible | https://openrouter.ai/api/v1 |
| Mistral | openai-compatible | https://api.mistral.ai/v1 |
Most providers that offer an "OpenAI-compatible" endpoint work with type: "openai-compatible".
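A small pre-flight check of the table's conventions can catch config mistakes before you restart anything. This is a hypothetical helper, not an OpenClaw feature:

```python
KNOWN_TYPES = {"openai-compatible", "ollama"}  # the types used in the table above

def preflight(provider: dict) -> list[str]:
    """Hypothetical sanity checks for one provider entry; not part of OpenClaw."""
    warnings = []
    ptype = provider.get("type")
    if ptype not in KNOWN_TYPES:
        warnings.append(f"unrecognized provider type: {ptype!r}")
    base_url = provider.get("baseUrl", "")
    # Every OpenAI-compatible endpoint in the table ends in /v1;
    # a missing suffix is a common copy-paste mistake.
    if ptype == "openai-compatible" and not base_url.rstrip("/").endswith("/v1"):
        warnings.append("OpenAI-compatible base URLs usually end in /v1")
    return warnings

preflight({"type": "openai-compatible", "baseUrl": "https://api.deepseek.com"})
# flags the missing /v1 suffix
```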
Step 3: Configure image models separately (if needed)
A common gotcha: you set up a custom provider for chat, but OpenClaw also tries to use it for image analysis — and the image model name is different or not supported.
{
  "providers": {
    "qwen": {
      "type": "openai-compatible",
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "apiKey": "sk-your-key",
      "models": {
        "qwen2.5-72b-instruct": {
          "maxTokens": 8192,
          "contextWindow": 131072,
          "supportsVision": false
        },
        "qwen-vl-max": {
          "maxTokens": 4096,
          "contextWindow": 32768,
          "supportsVision": true
        }
      }
    }
  },
  "ai": {
    "provider": "qwen",
    "model": "qwen2.5-72b-instruct",
    "imageModel": "qwen-vl-max"
  }
}
If you don't have a vision model, explicitly disable image processing:
{
  "ai": {
    "provider": "qwen",
    "model": "qwen2.5-72b-instruct",
    "imageModel": null
  }
}
Step 4: Verify it works
openclaw gateway restart
openclaw health
# Test the model directly
openclaw chat --provider qwen "Hello, what model are you?"
If it still doesn't work
Check these common gotchas:
# 1. Verify the API key works with a direct curl
curl -s https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer sk-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-72b-instruct",
    "messages": [{"role": "user", "content": "hi"}]
  }'
# 2. Check for typos in model name — some providers are case-sensitive
# "Qwen2.5-72B-Instruct" ≠ "qwen2.5-72b-instruct"
# 3. Check OpenClaw logs for the actual error
openclaw logs --tail 50 | grep -i "model\|provider\|error"
How to Prevent It
- Always declare models explicitly in your custom provider config. Don't rely on OpenClaw auto-detecting them.
- Use type: "openai-compatible" for any provider that offers an OpenAI-compatible API. This is the most flexible option.
- Test with curl first. Before configuring OpenClaw, verify that your API key and model name work with a raw API call. This eliminates OpenClaw from the debugging equation.
- Watch for model name changes. Providers rename and deprecate models. If it worked last week and broke today, check the provider's changelog.
The Easy Way
lobsterfarm is a managed hosting service for OpenClaw: a fully managed instance on your own server, running 24/7, with deployment, updates, and support handled for you.
Skip the setup. Start using your AI assistant today.