Fix: OpenAI 400 Error 'Reasoning Item Without Following Item' in OpenClaw
Getting '400 Item rs_... of type reasoning was provided without its required following item' from OpenAI? Here's the reasoning mode config fix.
TL;DR: OpenAI's reasoning responses include internal "thinking" items that can't be sent back. Clear your session history and check your reasoning config.
The Error
400 Item 'rs_abc123def456' of type 'reasoning' was provided without its required following item of type 'output'
You might also see:
Error: 400 Bad Request — invalid message history: reasoning item must be followed by output
openai-responses: failed to send message — reasoning/output pair incomplete
Why This Happens
When you use OpenAI models with reasoning enabled (like gpt-5.2 in reasoning mode, or the o-series models), the API returns reasoning items — internal "thinking" blocks that show the model's chain-of-thought.
The problem is how these get stored in your conversation history. OpenAI's API has a strict rule: every reasoning item (rs_...) must be immediately followed by its corresponding output item. If OpenClaw stores the reasoning item in the session history but something goes wrong with the output (timeout, error, partial response), the next API call sends back an orphaned reasoning item — and OpenAI rejects the entire request with a 400 error.
This creates a frustrating loop: the broken history keeps getting sent, and every subsequent message fails with the same error.
Common triggers:
- Network timeout during a reasoning response — the reasoning part arrived but the output didn't
- Session restored from disk with corrupted or truncated history
- Switching between reasoning and non-reasoning modes mid-conversation
- Context compaction that removed the output but kept the reasoning item
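The pairing rule is easy to check mechanically. Here's a minimal sketch of a validator that flags orphaned reasoning items — note the item shapes are simplified assumptions for illustration, not OpenClaw's actual storage format:

```python
def find_orphaned_reasoning(history):
    """Return indices of reasoning items not immediately followed by an output item."""
    orphans = []
    for i, item in enumerate(history):
        if item.get("type") == "reasoning":
            nxt = history[i + 1] if i + 1 < len(history) else None
            if nxt is None or nxt.get("type") != "output":
                orphans.append(i)
    return orphans

history = [
    {"type": "reasoning", "id": "rs_abc123def456"},   # reasoning arrived...
    # ...but the paired output item was lost to a timeout
    {"type": "message", "role": "user", "content": "hello?"},
]
print(find_orphaned_reasoning(history))  # → [0]
```

If this returns anything, the next request built from that history will fail with the 400 error above.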
How to Fix It
Step 1: Clear the current session
The fastest fix — nuke the corrupted conversation history:
openclaw session reset
Or if you're in a specific channel:
openclaw session reset --channel telegram
This clears the stored message history so the orphaned reasoning item is gone.
Step 2: Check your reasoning configuration
Make sure your OpenAI config handles reasoning responses correctly. In your clawdbot.json:
{
  "providers": {
    "openai": {
      "apiKey": "sk-your-key-here",
      "model": "gpt-5.2",
      "reasoning": {
        "enabled": true,
        "effort": "medium",
        "storeInHistory": false
      }
    }
  }
}
The key setting is "storeInHistory": false — it tells OpenClaw not to include reasoning items in the conversation history sent back to the API. The model still reasons internally, but the thinking blocks are discarded from the stored context, preventing the orphaned-item problem.
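Conceptually, this setting amounts to filtering reasoning items out of the history before it goes back over the wire. A hypothetical sketch (again using simplified item shapes, not OpenClaw's real internals):

```python
def strip_reasoning_items(history):
    """Drop all reasoning items so no orphan can ever be sent back to the API."""
    return [item for item in history if item.get("type") != "reasoning"]

stored = [
    {"type": "reasoning", "id": "rs_abc123def456"},
    {"type": "output", "content": "Here's the answer."},
    {"type": "message", "role": "user", "content": "thanks"},
]
outgoing = strip_reasoning_items(stored)
# outgoing keeps only the output and message items
```

Since no reasoning items are stored at all, the pairing rule can never be violated.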
Step 3: Switch to the Responses API (if you haven't)
OpenAI has two API modes: the older Chat Completions API and the newer Responses API. Reasoning works better with the Responses API:
{
  "providers": {
    "openai": {
      "apiMode": "responses",
      "model": "gpt-5.2"
    }
  }
}
The Responses API handles reasoning items natively and is less prone to the pairing issue.
Step 4: Restart the gateway
openclaw gateway restart
Send a test message. The 400 error should be gone.
If it keeps happening
If you're still getting the error after clearing the session, check for stale session files:
# Find session storage
find ~/.config/openclaw -name "*.session" -o -name "*.history" 2>/dev/null
# Or check the data directory
ls -la ~/.config/openclaw/data/sessions/
Delete any session files for the affected channel/user, then restart.
How to Prevent It
- Set "storeInHistory": false for reasoning — this is the single most effective prevention. You get the reasoning output, but don't risk corrupted history.
- Don't switch models mid-session — if you switch between a reasoning model and a non-reasoning model, clear the session first. The history formats are different.
- Use the Responses API — it's designed for reasoning models and handles the reasoning/output pairing correctly.
- Monitor for timeouts — if your network is flaky and reasoning responses are getting cut off, the orphaned items will pile up. Fix your connectivity or increase your timeout settings.
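If orphans do slip in, a defensive repair pass can salvage the session instead of letting every request fail. This is a sketch of one possible approach, not a feature OpenClaw ships — it keeps a reasoning item only when its required output item immediately follows:

```python
def drop_orphaned_reasoning(history):
    """Remove reasoning items whose next item is not their paired output."""
    repaired = []
    for i, item in enumerate(history):
        if item.get("type") == "reasoning":
            nxt = history[i + 1] if i + 1 < len(history) else None
            if nxt is None or nxt.get("type") != "output":
                continue  # orphaned: skip it rather than 400 the whole request
        repaired.append(item)
    return repaired

# A complete reasoning/output pair survives; a lone reasoning item is dropped.
broken = [{"type": "reasoning", "id": "rs_1"},
          {"type": "message", "role": "user", "content": "hello?"}]
print(drop_orphaned_reasoning(broken))  # → [{'type': 'message', 'role': 'user', 'content': 'hello?'}]
```

Running a pass like this when loading a session from disk would neutralize the corrupted-history loop described above.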
The Easy Way
lobsterfarm gives you a fully managed OpenClaw instance — deployment, updates, and support handled for you. One click, your own server, running 24/7. Skip the setup and start using your AI assistant today.