- Added clarification that tools specified in excludeTools will be disabled for the entire conversation context
- Added note that excludeTools configuration affects all subsequent queries in the current session
This change improves the extension documentation by making the scope and impact of the excludeTools configuration explicit.
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
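For context, a minimal sketch of what an extension config using excludeTools might look like; the object shape, the fields other than excludeTools, and the tool names are illustrative assumptions, not taken from the actual docs or schema:

```typescript
// Hypothetical extension config; only the excludeTools field comes from the
// change above, everything else is illustrative.
const extensionConfig = {
  name: 'my-extension', // assumed field
  version: '1.0.0',     // assumed field
  // Tools listed here are disabled for the entire conversation context,
  // i.e. for every subsequent query in the current session.
  excludeTools: ['run_shell_command', 'web_fetch'],
};
```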
The /chat list command displayed raw ANSI escape codes instead of colored text
because the escapeAnsiCtrlCodes function in HistoryItemDisplay escapes all ANSI
control characters. The listing now uses a plain-text format for better
compatibility and cleaner output.
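The fix itself simply formats the listing without ANSI styling; as a rough illustration of the underlying issue, a helper that strips escape sequences so the output stays plain text could look like the sketch below (the regex and function name are assumptions, not the repo's escapeAnsiCtrlCodes):

```typescript
// Illustrative only: strip SGR color codes so saved-chat entries render as
// plain text instead of showing raw escape sequences.
const ANSI_SGR_RE = /\x1b\[[0-9;]*m/g;

function toPlainText(text: string): string {
  return text.replace(ANSI_SGR_RE, '');
}

// "\x1b[1;34mmy-saved-chat\x1b[0m" -> "my-saved-chat"
console.log(toPlainText('\x1b[1;34mmy-saved-chat\x1b[0m'));
```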
* fix: make the token-limits regex normalize names like `some-model-1.1` -> `some-model` while preserving names like `gpt-4.1` as-is (see the token-limit sketch below).
* feat: update the token-limits regex for the latest models `GLM-4.6` and `deepseek-v3.2-exp`.
* feat: add the exact token limit of 202752 for `GLM-4.6`, per the model config file.
* feat: Add Qwen3-VL-Plus token limits (256K input, 32K output)
- Added a 256K input context window limit for the Qwen3-VL-Plus model
- Raised the output token limit for Qwen3-VL-Plus from 8K to 32K
- Added comprehensive tests for both input and output limits
As requested by Qwen maintainers for proper model support.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
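Taken together, the token-limit changes above amount to a name-normalization step plus a per-model lookup. The sketch below is illustrative only: the regex, the function names, and the map structure are assumptions, and the only figures taken from these commits are the 202752 input limit for GLM-4.6 and the 256K input / 32K output limits for Qwen3-VL-Plus.

```typescript
// Illustrative sketch of a token-limit lookup; not the repo's tokenLimits module.
const INPUT_TOKEN_LIMITS: Record<string, number> = {
  'glm-4.6': 202_752,       // exact figure from the model config file
  'qwen3-vl-plus': 262_144, // 256K input context window
};

const OUTPUT_TOKEN_LIMITS: Record<string, number> = {
  'qwen3-vl-plus': 32_768,  // raised from 8K to 32K
};

/**
 * Strip a trailing "-<major>.<minor>" release suffix only when a multi-part
 * base name remains, so "some-model-1.1" -> "some-model" while "gpt-4.1" and
 * "glm-4.6" (where the version is part of the family name) stay as-is.
 * The pattern is one possible regex, not necessarily the one used in the fix.
 */
function normalizeModelName(model: string): string {
  const name = model.toLowerCase();
  const match = name.match(/^(.+-[a-z]+)-\d+(?:\.\d+)*$/);
  return match ? match[1] : name;
}

function inputTokenLimit(model: string): number | undefined {
  const name = model.toLowerCase();
  return INPUT_TOKEN_LIMITS[name] ?? INPUT_TOKEN_LIMITS[normalizeModelName(name)];
}
```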
* fix: enable high-res flag for qwen VL models
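A minimal sketch of how such a flag might be wired up, assuming the request goes to a DashScope-style endpoint that accepts a vl_high_resolution_images parameter; the helper names and the model-name check are illustrative, not the repo's code:

```typescript
// Sketch only: gate a high-resolution-image flag on vision-language models.
// The vl_high_resolution_images parameter is an assumption about the
// upstream API; the model check is illustrative.
function isQwenVlModel(model: string): boolean {
  return /^qwen.*vl/i.test(model); // e.g. qwen-vl-max, qwen3-vl-plus
}

function buildRequestExtras(model: string): Record<string, unknown> {
  return isQwenVlModel(model) ? { vl_high_resolution_images: true } : {};
}
```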
---------
Co-authored-by: Claude <noreply@anthropic.com>