Compare commits

..

17 Commits

Author SHA1 Message Date
mingholy.lmh
b9a3a60418 fix: lint issues 2025-12-18 18:30:55 +08:00
mingholy.lmh
8928fc1534 feat: add modelProviders in settings to support custom model switching 2025-12-18 18:30:09 +08:00
Alexander Farber
8106a6b0f4 Handle PAT tokens and credentials in git remote URL parsing (#1225) 2025-12-18 00:44:46 +08:00
pomelo
c0839dceac Merge pull request #1266 from QwenLM/docs-fix
docs:Fix the errors in the document
2025-12-17 22:04:27 +08:00
joeytoday
f9a1ee2442 docs: updated vscode showcase video 2025-12-17 16:47:37 +08:00
joeytoday
f824004f99 docs: updated links in index.md 2025-12-17 15:03:23 +08:00
Mingholy
e274b4469a Merge pull request #1214 from kfxmvp/fix/issue-1186-schema-converter
fix: add configurable OpenAPI 3.0 schema compliance for Gemini compatibility (#1186)
2025-12-17 11:12:57 +08:00
joeytoday
a4e3d764d3 docs: updated all links, click and open in vscode, new showcase video in overview 2025-12-17 11:10:31 +08:00
tanzhenxin
0a39c91264 Merge pull request #1275 from QwenLM/fix/integration-test
remove one flaky integration test
2025-12-17 10:06:28 +08:00
joeytoday
d1a6b3207e docs: updated inline links 2025-12-16 17:01:47 +08:00
pomelo-nwu
1c62499977 feat: fix link 2025-12-16 15:40:01 +08:00
pomelo-nwu
4b8b4e2fe8 feat: update docs 2025-12-16 15:32:21 +08:00
pomelo-nwu
36fb6b8291 feat: update docs 2025-12-16 13:48:10 +08:00
kefuxin
573c33f68a Merge remote-tracking branch 'upstream/main' into fix/issue-1186-schema-converter 2025-12-16 11:08:51 +08:00
kefuxin
44794121a8 docs: update MCP server schema compliance documentation
Update documentation to reflect the new `schemaCompliance` setting and detailed OpenAPI 3.0 transformation rules.

Suggested-by: afarber
2025-12-12 10:38:00 +08:00
kefuxin
84cccfe99a feat: add i18n for schemaCompliance setting 2025-12-11 14:30:38 +08:00
kefuxin
b6a3ab11e0 fix: improve Gemini compatibility by adding configurable schema converter
This commit addresses issue #1186 by introducing a configurable schema compliance
mechanism for tool definitions sent to LLMs.

Key changes:
1.  **New Configuration**: Added `model.generationConfig.schemaCompliance` setting (defaults to 'auto', optional 'openapi_30').
2.  **Schema Converter**: Implemented `toOpenAPI30` converter in `packages/core` to strictly downgrade modern JSON Schema to OpenAPI 3.0.3 (required for Gemini API), handling:
    -   Nullable types (`["string", "null"]` -> `nullable: true`)
    -   Numeric exclusive limits
    -   Const to Enum conversion
    -   Removal of tuples and invalid keywords (`$schema`, `dependencies`, etc.)
3.  **Tests**: Added comprehensive unit tests for the schema converter and updated pipeline tests.

Fixes #1186
2025-12-11 14:23:27 +08:00
49 changed files with 2339 additions and 188 deletions

View File

@@ -627,7 +627,12 @@ The MCP integration tracks several states:
### Schema Compatibility
- **Property stripping:** The system automatically removes certain schema properties (`$schema`, `additionalProperties`) for Qwen API compatibility
- **Schema compliance mode:** By default (`schemaCompliance: "auto"`), tool schemas are passed through as-is. Set `"model": { "generationConfig": { "schemaCompliance": "openapi_30" } }` in your `settings.json` to convert tool schemas to strict OpenAPI 3.0 format.
- **OpenAPI 3.0 transformations:** When `openapi_30` mode is enabled, the system handles the following (see the sketch after this list):
  - Nullable types: `["string", "null"]` -> `type: "string", nullable: true`
  - Const values: `const: "foo"` -> `enum: ["foo"]`
  - Exclusive limits: numeric `exclusiveMinimum` -> boolean form with `minimum`
  - Keyword removal: `$schema`, `$id`, `dependencies`, `patternProperties`
- **Name sanitization:** Tool names are automatically sanitized to meet API requirements
- **Conflict resolution:** Tool name conflicts between servers are resolved through automatic prefixing
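To make these transformations concrete, here is a minimal sketch of a tool schema before and after the `openapi_30` conversion. The object literals are illustrative assumptions based on the rules above, not fixtures taken from the actual `toOpenAPI30` converter.

```typescript
// Illustrative only: a modern JSON Schema fragment and the strict OpenAPI 3.0.3
// shape that the rules listed above would produce for it.
const modernSchema = {
  $schema: 'http://json-schema.org/draft-07/schema#', // removed keyword
  type: 'object',
  properties: {
    name: { type: ['string', 'null'] },             // nullable via type array
    mode: { const: 'fast' },                        // const value
    limit: { type: 'number', exclusiveMinimum: 0 }, // numeric exclusive limit
  },
};

const openApi30Schema = {
  type: 'object',
  properties: {
    name: { type: 'string', nullable: true },       // -> nullable: true
    mode: { type: 'string', enum: ['fast'] },       // const -> enum
    limit: { type: 'number', minimum: 0, exclusiveMinimum: true }, // boolean form with minimum
  },
  // $schema, $id, dependencies, and patternProperties are dropped entirely.
};
```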

View File

@@ -14,7 +14,7 @@ Learn how to use Qwen Code as an end user. This section covers:
- Configuration options
- Troubleshooting
### [Developer Guide](./developers/contributing)
### [Developer Guide](./developers/architecture)
Learn how to contribute to and develop Qwen Code. This section covers:

View File

@@ -189,8 +189,8 @@ Then select "create" and follow the prompts to define:
> - Create project-specific subagents in `.qwen/agents/` for team sharing
> - Use descriptive `description` fields to enable automatic delegation
> - Limit tool access to what each subagent actually needs
> - Know more about [Sub Agents](/users/features/sub-agents)
> - Know more about [Approval Mode](/users/features/approval-mode)
> - Know more about [Sub Agents](./features/sub-agents)
> - Know more about [Approval Mode](./features/approval-mode)
## Work with tests
@@ -318,7 +318,7 @@ This provides a directory listing with file information.
Show me the data from @github: repos/owner/repo/issues
```
This fetches data from connected MCP servers using the format @server: resource. See [MCP](/users/features/mcp) for details.
This fetches data from connected MCP servers using the format @server: resource. See [MCP](./features/mcp) for details.
> [!tip]
>

View File

@@ -6,7 +6,7 @@ Qwen Code includes the ability to automatically ignore files, similar to `.gitig
## How it works
When you add a path to your `.qwenignore` file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the [`read_many_files`](/developers/tools/multi-file) command, any paths in your `.qwenignore` file will be automatically excluded.
When you add a path to your `.qwenignore` file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the [`read_many_files`](../../developers/tools/multi-file) command, any paths in your `.qwenignore` file will be automatically excluded.
For the most part, `.qwenignore` follows the conventions of `.gitignore` files:

View File

@@ -2,7 +2,7 @@
> [!tip]
>
> **Authentication / API keys:** Authentication (Qwen OAuth vs OpenAI-compatible API) and auth-related environment variables (like `OPENAI_API_KEY`) are documented in **[Authentication](/users/configuration/auth)**.
> **Authentication / API keys:** Authentication (Qwen OAuth vs OpenAI-compatible API) and auth-related environment variables (like `OPENAI_API_KEY`) are documented in **[Authentication](../configuration/auth)**.
> [!note]
>
@@ -42,7 +42,7 @@ Qwen Code uses JSON settings files for persistent configuration. There are four
In addition to a project settings file, a project's `.qwen` directory can contain other project-specific files related to Qwen Code's operation, such as:
- [Custom sandbox profiles](/users/features/sandbox) (e.g. `.qwen/sandbox-macos-custom.sb`, `.qwen/sandbox.Dockerfile`).
- [Custom sandbox profiles](../features/sandbox) (e.g. `.qwen/sandbox-macos-custom.sb`, `.qwen/sandbox.Dockerfile`).
### Available settings in `settings.json`
@@ -69,7 +69,7 @@ Settings are organized into categories. All settings should be placed within the
| Setting | Type | Description | Default |
| ---------------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| `ui.theme` | string | The color theme for the UI. See [Themes](/users/configuration/themes) for available options. | `undefined` |
| `ui.theme` | string | The color theme for the UI. See [Themes](../configuration/themes) for available options. | `undefined` |
| `ui.customThemes` | object | Custom theme definitions. | `{}` |
| `ui.hideWindowTitle` | boolean | Hide the window title bar. | `false` |
| `ui.hideTips` | boolean | Hide helpful tips in the UI. | `false` |
@@ -135,6 +135,69 @@ Settings are organized into categories. All settings should be placed within the
- `"./custom-logs"` - Logs to `./custom-logs` relative to current directory
- `"/tmp/openai-logs"` - Logs to absolute path `/tmp/openai-logs`
#### `modelProviders`
The `modelProviders` configuration allows you to define multiple models for a specific authentication type. Currently, only the `openai` authentication type is supported.
| Field | Type | Required | Description | Default |
| -------------------------------------- | ------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| `id` | string | Yes | Unique identifier for the model within the authentication type. | - |
| `name` | string | No | Display name for the model. | Same as `id` |
| `description` | string | No | A brief description of the model. | `undefined` |
| `envKey` | string | No | The name of the environment variable containing the API key for this model. For example, if set to `"OPENAI_API_KEY"`, the system will read the API key from `process.env.OPENAI_API_KEY`. This keeps API keys secure in environment variables. | `undefined` |
| `baseUrl` | string | No | Custom API endpoint URL. If not specified, uses the default URL for the authentication type. | `undefined` |
| `capabilities.vision` | boolean | No | Whether the model supports vision/image inputs. | `false` |
| `generationConfig.temperature` | number | No | Sampling temperature. Refer to your provider's documentation. | `undefined` |
| `generationConfig.top_p` | number | No | Nucleus sampling parameter. Refer to your provider's documentation. | `undefined` |
| `generationConfig.top_k` | number | No | Top-k sampling parameter. Refer to your provider's documentation. | `undefined` |
| `generationConfig.max_tokens` | number | No | Maximum output tokens. | `undefined` |
| `generationConfig.timeout` | number | No | Request timeout in milliseconds. | `undefined` |
| `generationConfig.maxRetries` | number | No | Maximum retry attempts. | `undefined` |
| `generationConfig.disableCacheControl` | boolean | No | Disable cache control for DashScope providers. | `false` |
**Example Configuration:**
```json
{
"modelProviders": {
"openai": [
{
"id": "gpt-4-turbo",
"name": "GPT-4 Turbo",
"description": "Most capable GPT-4 model",
"envKey": "OPENAI_API_KEY",
"baseUrl": "https://api.openai.com/v1",
"capabilities": {
"vision": true
},
"generationConfig": {
"temperature": 0.7,
"max_tokens": 4096
}
},
{
"id": "deepseek-coder",
"name": "DeepSeek Coder",
"description": "DeepSeek coding model",
"envKey": "DEEPSEEK_API_KEY",
"baseUrl": "https://api.deepseek.com/v1",
"generationConfig": {
"temperature": 0.5,
"max_tokens": 8192
}
}
]
}
}
```
**Security Note:** API keys should never be stored directly in configuration files. Always use the `envKey` field to reference environment variables where your API keys are stored. Set these environment variables in your shell profile or `.env` files:
```bash
export OPENAI_API_KEY="your-api-key-here"
export DEEPSEEK_API_KEY="your-deepseek-key-here"
```
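As a rough illustration of the `envKey` indirection (the helper below is hypothetical, not part of the CLI's public API), resolving a key is just an environment lookup at runtime:

```typescript
// Hypothetical sketch: resolve a provider entry's API key from the environment.
// Field names follow the modelProviders table above; requires Node.js (process.env).
interface ProviderModel {
  id: string;
  envKey?: string; // e.g. "DEEPSEEK_API_KEY"
  baseUrl?: string;
}

function resolveApiKey(model: ProviderModel): string | undefined {
  // Only the variable *name* is stored in settings.json; the key itself stays in the environment.
  return model.envKey ? process.env[model.envKey] : undefined;
}
```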
#### context
| Setting | Type | Description | Default |
@@ -326,7 +389,7 @@ The CLI keeps a history of shell commands you run. To avoid conflicts between di
Environment variables are a common way to configure applications, especially for sensitive information (like tokens) or for settings that might change between environments.
Qwen Code can automatically load environment variables from `.env` files.
For authentication-related variables (like `OPENAI_*`) and the recommended `.qwen/.env` approach, see **[Authentication](/users/configuration/auth)**.
For authentication-related variables (like `OPENAI_*`) and the recommended `.qwen/.env` approach, see **[Authentication](../configuration/auth)**.
> [!tip]
>
@@ -357,38 +420,38 @@ Arguments passed directly when running the CLI can override other configurations
### Command-Line Arguments Table
| Argument | Alias | Description | Possible Values | Notes |
| ---------------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--model` | `-m` | Specifies the Qwen model to use for this session. | Model name | Example: `npm start -- --model qwen3-coder-plus` |
| `--prompt` | `-p` | Used to pass a prompt directly to the command. This invokes Qwen Code in a non-interactive mode. | Your prompt text | For scripting examples, use the `--output-format json` flag to get structured output. |
| `--prompt-interactive` | `-i` | Starts an interactive session with the provided prompt as the initial input. | Your prompt text | The prompt is processed within the interactive session, not before it. Cannot be used when piping input from stdin. Example: `qwen -i "explain this code"` |
| `--output-format` | `-o` | Specifies the format of the CLI output for non-interactive mode. | `text`, `json`, `stream-json` | `text`: (Default) The standard human-readable output. `json`: A machine-readable JSON output emitted at the end of execution. `stream-json`: Streaming JSON messages emitted as they occur during execution. For structured output and scripting, use the `--output-format json` or `--output-format stream-json` flag. See [Headless Mode](/users/features/headless) for detailed information. |
| `--input-format` | | Specifies the format consumed from standard input. | `text`, `stream-json` | `text`: (Default) Standard text input from stdin or command-line arguments. `stream-json`: JSON message protocol via stdin for bidirectional communication. Requirement: `--input-format stream-json` requires `--output-format stream-json` to be set. When using `stream-json`, stdin is reserved for protocol messages. See [Headless Mode](/users/features/headless) for detailed information. |
| `--include-partial-messages` | | Include partial assistant messages when using `stream-json` output format. When enabled, emits stream events (message_start, content_block_delta, etc.) as they occur during streaming. | | Default: `false`. Requirement: Requires `--output-format stream-json` to be set. See [Headless Mode](/users/features/headless) for detailed information about stream events. |
| `--sandbox` | `-s` | Enables sandbox mode for this session. | | |
| `--sandbox-image` | | Sets the sandbox image URI. | | |
| `--debug` | `-d` | Enables debug mode for this session, providing more verbose output. | | |
| `--all-files` | `-a` | If set, recursively includes all files within the current directory as context for the prompt. | | |
| `--help` | `-h` | Displays help information about command-line arguments. | | |
| `--show-memory-usage` | | Displays the current memory usage. | | |
| `--yolo` | | Enables YOLO mode, which automatically approves all tool calls. | | |
| `--approval-mode` | | Sets the approval mode for tool calls. | `plan`, `default`, `auto-edit`, `yolo` | Supported modes: `plan`: Analyze only—do not modify files or execute commands. `default`: Require approval for file edits or shell commands (default behavior). `auto-edit`: Automatically approve edit tools (edit, write_file) while prompting for others. `yolo`: Automatically approve all tool calls (equivalent to `--yolo`). Cannot be used together with `--yolo`. Use `--approval-mode=yolo` instead of `--yolo` for the new unified approach. Example: `qwen --approval-mode auto-edit`<br>See more about [Approval Mode](/users/features/approval-mode). |
| `--allowed-tools` | | A comma-separated list of tool names that will bypass the confirmation dialog. | Tool names | Example: `qwen --allowed-tools "Shell(git status)"` |
| `--telemetry` | | Enables [telemetry](/developers/development/telemetry). | | |
| `--telemetry-target` | | Sets the telemetry target. | | See [telemetry](/developers/development/telemetry) for more information. |
| `--telemetry-otlp-endpoint` | | Sets the OTLP endpoint for telemetry. | | See [telemetry](/developers/development/telemetry) for more information. |
| `--telemetry-otlp-protocol` | | Sets the OTLP protocol for telemetry (`grpc` or `http`). | | Defaults to `grpc`. See [telemetry](/developers/development/telemetry) for more information. |
| `--telemetry-log-prompts` | | Enables logging of prompts for telemetry. | | See [telemetry](/developers/development/telemetry) for more information. |
| `--checkpointing` | | Enables [checkpointing](/users/features/checkpointing). | | |
| `--extensions` | `-e` | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use the special term `qwen -e none` to disable all extensions. Example: `qwen -e my-extension -e my-other-extension` |
| `--list-extensions` | `-l` | Lists all available extensions and exits. | | |
| `--proxy` | | Sets the proxy for the CLI. | Proxy URL | Example: `--proxy http://localhost:7890`. |
| `--include-directories` | | Includes additional directories in the workspace for multi-directory support. | Directory paths | Can be specified multiple times or as comma-separated values. 5 directories can be added at maximum. Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2` |
| `--screen-reader` | | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | | |
| `--version` | | Displays the version of the CLI. | | |
| `--openai-logging` | | Enables logging of OpenAI API calls for debugging and analysis. | | This flag overrides the `enableOpenAILogging` setting in `settings.json`. |
| `--openai-logging-dir` | | Sets a custom directory path for OpenAI API logs. | Directory path | This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion. Example: `qwen --openai-logging-dir "~/qwen-logs" --openai-logging` |
| `--tavily-api-key` | | Sets the Tavily API key for web search functionality for this session. | API key | Example: `qwen --tavily-api-key tvly-your-api-key-here` |
| Argument | Alias | Description | Possible Values | Notes |
| ---------------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--model` | `-m` | Specifies the Qwen model to use for this session. | Model name | Example: `npm start -- --model qwen3-coder-plus` |
| `--prompt` | `-p` | Used to pass a prompt directly to the command. This invokes Qwen Code in a non-interactive mode. | Your prompt text | For scripting examples, use the `--output-format json` flag to get structured output. |
| `--prompt-interactive` | `-i` | Starts an interactive session with the provided prompt as the initial input. | Your prompt text | The prompt is processed within the interactive session, not before it. Cannot be used when piping input from stdin. Example: `qwen -i "explain this code"` |
| `--output-format` | `-o` | Specifies the format of the CLI output for non-interactive mode. | `text`, `json`, `stream-json` | `text`: (Default) The standard human-readable output. `json`: A machine-readable JSON output emitted at the end of execution. `stream-json`: Streaming JSON messages emitted as they occur during execution. For structured output and scripting, use the `--output-format json` or `--output-format stream-json` flag. See [Headless Mode](../features/headless) for detailed information. |
| `--input-format` | | Specifies the format consumed from standard input. | `text`, `stream-json` | `text`: (Default) Standard text input from stdin or command-line arguments. `stream-json`: JSON message protocol via stdin for bidirectional communication. Requirement: `--input-format stream-json` requires `--output-format stream-json` to be set. When using `stream-json`, stdin is reserved for protocol messages. See [Headless Mode](../features/headless) for detailed information. |
| `--include-partial-messages` | | Include partial assistant messages when using `stream-json` output format. When enabled, emits stream events (message_start, content_block_delta, etc.) as they occur during streaming. | | Default: `false`. Requirement: Requires `--output-format stream-json` to be set. See [Headless Mode](../features/headless) for detailed information about stream events. |
| `--sandbox` | `-s` | Enables sandbox mode for this session. | | |
| `--sandbox-image` | | Sets the sandbox image URI. | | |
| `--debug` | `-d` | Enables debug mode for this session, providing more verbose output. | | |
| `--all-files` | `-a` | If set, recursively includes all files within the current directory as context for the prompt. | | |
| `--help` | `-h` | Displays help information about command-line arguments. | | |
| `--show-memory-usage` | | Displays the current memory usage. | | |
| `--yolo` | | Enables YOLO mode, which automatically approves all tool calls. | | |
| `--approval-mode` | | Sets the approval mode for tool calls. | `plan`, `default`, `auto-edit`, `yolo` | Supported modes: `plan`: Analyze only—do not modify files or execute commands. `default`: Require approval for file edits or shell commands (default behavior). `auto-edit`: Automatically approve edit tools (edit, write_file) while prompting for others. `yolo`: Automatically approve all tool calls (equivalent to `--yolo`). Cannot be used together with `--yolo`. Use `--approval-mode=yolo` instead of `--yolo` for the new unified approach. Example: `qwen --approval-mode auto-edit`<br>See more about [Approval Mode](../features/approval-mode). |
| `--allowed-tools` | | A comma-separated list of tool names that will bypass the confirmation dialog. | Tool names | Example: `qwen --allowed-tools "Shell(git status)"` |
| `--telemetry` | | Enables [telemetry](/developers/development/telemetry). | | |
| `--telemetry-target` | | Sets the telemetry target. | | See [telemetry](/developers/development/telemetry) for more information. |
| `--telemetry-otlp-endpoint` | | Sets the OTLP endpoint for telemetry. | | See [telemetry](../../developers/development/telemetry) for more information. |
| `--telemetry-otlp-protocol` | | Sets the OTLP protocol for telemetry (`grpc` or `http`). | | Defaults to `grpc`. See [telemetry](../../developers/development/telemetry) for more information. |
| `--telemetry-log-prompts` | | Enables logging of prompts for telemetry. | | See [telemetry](../../developers/development/telemetry) for more information. |
| `--checkpointing` | | Enables [checkpointing](../features/checkpointing). | | |
| `--extensions` | `-e` | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use the special term `qwen -e none` to disable all extensions. Example: `qwen -e my-extension -e my-other-extension` |
| `--list-extensions` | `-l` | Lists all available extensions and exits. | | |
| `--proxy` | | Sets the proxy for the CLI. | Proxy URL | Example: `--proxy http://localhost:7890`. |
| `--include-directories` | | Includes additional directories in the workspace for multi-directory support. | Directory paths | Can be specified multiple times or as comma-separated values. 5 directories can be added at maximum. Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2` |
| `--screen-reader` | | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | | |
| `--version` | | Displays the version of the CLI. | | |
| `--openai-logging` | | Enables logging of OpenAI API calls for debugging and analysis. | | This flag overrides the `enableOpenAILogging` setting in `settings.json`. |
| `--openai-logging-dir` | | Sets a custom directory path for OpenAI API logs. | Directory path | This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion. Example: `qwen --openai-logging-dir "~/qwen-logs" --openai-logging` |
| `--tavily-api-key` | | Sets the Tavily API key for web search functionality for this session. | API key | Example: `qwen --tavily-api-key tvly-your-api-key-here` |
## Context Files (Hierarchical Instructional Context)
@@ -438,11 +501,11 @@ This example demonstrates how you can provide general project context, specific
- Location: The CLI also scans for the configured context file in subdirectories _below_ the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the `context.discoveryMaxDirs` setting in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](/users/configuration/memory).
- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](../configuration/memory).
- **Commands for Memory Management:**
- Use `/memory refresh` to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
- See the [Commands documentation](/users/reference/cli-reference) for full details on the `/memory` command and its sub-commands (`show` and `refresh`).
- See the [Commands documentation](../features/commands) for full details on the `/memory` command and its sub-commands (`show` and `refresh`).
By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI's memory and tailor Qwen Code's responses to your specific needs and projects.
@@ -450,7 +513,7 @@ By understanding and utilizing these configuration layers and the hierarchical n
Qwen Code can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.
[Sandbox](/users/features/sandbox) is disabled by default, but you can enable it in a few ways:
[Sandbox](../features/sandbox) is disabled by default, but you can enable it in a few ways:
- Using `--sandbox` or `-s` flag.
- Setting `GEMINI_SANDBOX` environment variable.

View File

@@ -32,7 +32,7 @@ Qwen Code comes with a selection of pre-defined themes, which you can list using
### Theme Persistence
Selected themes are saved in Qwen Code's [configuration](./configuration.md) so your preference is remembered across sessions.
Selected themes are saved in Qwen Code's [configuration](../configuration/settings) so your preference is remembered across sessions.
---
@@ -146,7 +146,7 @@ The theme file must be a valid JSON file that follows the same structure as a cu
- Select your custom theme using the `/theme` command in Qwen Code. Your custom theme will appear in the theme selection dialog.
- Or, set it as the default by adding `"theme": "MyCustomTheme"` to the `ui` object in your `settings.json`.
- Custom themes can be set at the user, project, or system level, and follow the same [configuration precedence](./configuration.md) as other settings.
- Custom themes can be set at the user, project, or system level, and follow the same [configuration precedence](../configuration/settings) as other settings.
## Themes Preview

View File

@@ -56,6 +56,6 @@ If you need to change a decision or see all your settings, you have a couple of
For advanced users, it's helpful to know the exact order of operations for how trust is determined:
1. **IDE Trust Signal**: If you are using the [IDE Integration](/users/ide-integration/ide-integration), the CLI first asks the IDE if the workspace is trusted. The IDE's response takes highest priority.
1. **IDE Trust Signal**: If you are using the [IDE Integration](../ide-integration/ide-integration), the CLI first asks the IDE if the workspace is trusted. The IDE's response takes highest priority.
2. **Local Trust File**: If the IDE is not connected, the CLI checks the central `~/.qwen/trustedFolders.json` file.

View File

@@ -1,3 +1,5 @@
# Approval Mode
Qwen Code offers three distinct permission modes that allow you to flexibly control how AI interacts with your code and system based on task complexity and risk level.
## Permission Modes Comparison

View File

@@ -203,7 +203,7 @@ Key command-line options for headless usage:
| `--continue` | Resume the most recent session for this project | `qwen --continue -p "Pick up where we left off"` |
| `--resume [sessionId]` | Resume a specific session (or choose interactively) | `qwen --resume 123e... -p "Finish the refactor"` |
For complete details on all available configuration options, settings files, and environment variables, see the [Configuration Guide](/users/configuration/settings).
For complete details on all available configuration options, settings files, and environment variables, see the [Configuration Guide](../configuration/settings).
## Examples
@@ -276,7 +276,7 @@ tail -5 usage.log
## Resources
- [CLI Configuration](/users/configuration/settings#command-line-arguments) - Complete configuration guide
- [Authentication](/users/configuration/settings#environment-variables-for-api-access) - Setup authentication
- [Commands](/users/reference/cli-reference) - Interactive commands reference
- [Tutorials](/users/quickstart) - Step-by-step automation guides
- [CLI Configuration](../configuration/settings#command-line-arguments) - Complete configuration guide
- [Authentication](../configuration/settings#environment-variables-for-api-access) - Setup authentication
- [Commands](../features/commands) - Interactive commands reference
- [Tutorials](../quickstart) - Step-by-step automation guides

View File

@@ -12,6 +12,7 @@ With MCP servers connected, you can ask Qwen Code to:
- Automate workflows (repeatable tasks exposed as tools/prompts)
> [!tip]
>
> If you're looking for the “one command to get started”, jump to [Quick start](#quick-start).
## Quick start
@@ -51,7 +52,8 @@ qwen mcp add --scope user --transport http my-server http://localhost:3000/mcp
```
> [!tip]
> For advanced configuration layers (system defaults/system settings and precedence rules), see [Settings](/users/configuration/settings).
>
> For advanced configuration layers (system defaults/system settings and precedence rules), see [Settings](../configuration/settings).
## Configure servers
@@ -64,6 +66,7 @@ qwen mcp add --scope user --transport http my-server http://localhost:3000/mcp
| `stdio` | Local process (scripts, CLIs, Docker) on your machine | `command`, `args` (+ optional `cwd`, `env`) |
> [!note]
>
> If a server supports both, prefer **HTTP** over **SSE**.
### Configure via `settings.json` vs `qwen mcp add`

View File

@@ -220,6 +220,6 @@ qwen -s -p "run shell command: mount | grep workspace"
## Related documentation
- [Configuration](/users/configuration/settings): Full configuration options.
- [Commands](/users/reference/cli-reference): Available commands.
- [Troubleshooting](/users/support/troubleshooting): General troubleshooting.
- [Configuration](../configuration/settings): Full configuration options.
- [Commands](../features/commands): Available commands.
- [Troubleshooting](../support/troubleshooting): General troubleshooting.

View File

@@ -2,7 +2,7 @@
Qwen Code can integrate with your IDE to provide a more seamless and context-aware experience. This integration allows the CLI to understand your workspace better and enables powerful features like native in-editor diffing.
Currently, the only supported IDE is [Visual Studio Code](https://code.visualstudio.com/) and other editors that support VS Code extensions. To build support for other editors, see the [IDE Companion Extension Spec](/users/ide-integration/ide-companion-spec).
Currently, the only supported IDE is [Visual Studio Code](https://code.visualstudio.com/) and other editors that support VS Code extensions. To build support for other editors, see the [IDE Companion Extension Spec](../ide-integration/ide-companion-spec).
## Features

View File

@@ -6,41 +6,14 @@
Use it to perform GitHub pull request reviews, triage issues, perform code analysis and modification, and more using [Qwen Code] conversationally (e.g., `@qwencoder fix this issue`) directly inside your GitHub repositories.
- [qwen-code-action](#qwen-code-action)
- [Overview](#overview)
- [Features](#features)
- [Quick Start](#quick-start)
- [1. Get a Qwen API Key](#1-get-a-qwen-api-key)
- [2. Add it as a GitHub Secret](#2-add-it-as-a-github-secret)
- [3. Update your .gitignore](#3-update-your-gitignore)
- [4. Choose a Workflow](#4-choose-a-workflow)
- [5. Try it out](#5-try-it-out)
- [Workflows](#workflows)
- [Qwen Code Dispatch](#qwen-code-dispatch)
- [Issue Triage](#issue-triage)
- [Pull Request Review](#pull-request-review)
- [Qwen Code CLI Assistant](#qwen-code-cli-assistant)
- [Configuration](#configuration)
- [Inputs](#inputs)
- [Outputs](#outputs)
- [Repository Variables](#repository-variables)
- [Secrets](#secrets)
- [Authentication](#authentication)
- [GitHub Authentication](#github-authentication)
- [Extensions](#extensions)
- [Best Practices](#best-practices)
- [Customization](#customization)
- [Contributing](#contributing)
## Features
- **Automation**: Trigger workflows based on events (e.g. issue opening) or schedules (e.g. nightly).
- **On-demand Collaboration**: Trigger workflows in issue and pull request
comments by mentioning the [Qwen Code CLI] (e.g., `@qwencoder /review`).
- **Extensible with Tools**: Leverage [Qwen Code] models' tool-calling capabilities to
interact with other CLIs like the [GitHub CLI] (`gh`).
comments by mentioning the [Qwen Code CLI](./features/commands) (e.g., `@qwencoder /review`).
- **Extensible with Tools**: Leverage [Qwen Code](../developers/tools/introduction.md) models' tool-calling capabilities to interact with other CLIs like the [GitHub CLI] (`gh`).
- **Customizable**: Use a `QWEN.md` file in your repository to provide
project-specific instructions and context to [Qwen Code CLI].
project-specific instructions and context to [Qwen Code CLI](./features/commands).
## Quick Start
@@ -48,7 +21,7 @@ Get started with Qwen Code CLI in your repository in just a few minutes:
### 1. Get a Qwen API Key
Obtain your API key from [DashScope] (Alibaba Cloud's AI platform)
Obtain your API key from [DashScope](https://help.aliyun.com/zh/model-studio/qwen-code) (Alibaba Cloud's AI platform)
### 2. Add it as a GitHub Secret
@@ -90,7 +63,7 @@ You have two options to set up a workflow:
**Option B: Manually copy workflows**
1. Copy the pre-built workflows from the [`examples/workflows`](./examples/workflows) directory to your repository's `.github/workflows` directory. Note: the `qwen-dispatch.yml` workflow must also be copied, which triggers the workflows to run.
1. Copy the pre-built workflows from the [`examples/workflows`](./common-workflow) directory to your repository's `.github/workflows` directory. Note: the `qwen-dispatch.yml` workflow must also be copied, which triggers the workflows to run.
### 5. Try it out
@@ -119,30 +92,19 @@ This action provides several pre-built workflows for different use cases. Each w
### Qwen Code Dispatch
This workflow acts as a central dispatcher for Qwen Code CLI, routing requests to
the appropriate workflow based on the triggering event and the command provided
in the comment. For a detailed guide on how to set up the dispatch workflow, go
to the
[Qwen Code Dispatch workflow documentation](./examples/workflows/qwen-dispatch).
This workflow acts as a central dispatcher for Qwen Code CLI, routing requests to the appropriate workflow based on the triggering event and the command provided in the comment. For a detailed guide on how to set up the dispatch workflow, go to the [Qwen Code Dispatch workflow documentation](./common-workflow).
### Issue Triage
This action can be used to triage GitHub Issues automatically or on a schedule.
For a detailed guide on how to set up the issue triage system, go to the
[GitHub Issue Triage workflow documentation](./examples/workflows/issue-triage).
This action can be used to triage GitHub Issues automatically or on a schedule. For a detailed guide on how to set up the issue triage system, go to the [GitHub Issue Triage workflow documentation](./examples/workflows/issue-triage).
### Pull Request Review
This action can be used to automatically review pull requests when they are
opened. For a detailed guide on how to set up the pull request review system,
go to the [GitHub PR Review workflow documentation](./examples/workflows/pr-review).
This action can be used to automatically review pull requests when they are opened. For a detailed guide on how to set up the pull request review system, go to the [GitHub PR Review workflow documentation](./common-workflow).
### Qwen Code CLI Assistant
This type of action can be used to invoke a general-purpose, conversational Qwen Code
AI assistant within the pull requests and issues to perform a wide range of
tasks. For a detailed guide on how to set up the general-purpose Qwen Code CLI workflow,
go to the [Qwen Code Assistant workflow documentation](./examples/workflows/qwen-assistant).
This type of action can be used to invoke a general-purpose, conversational Qwen Code AI assistant within the pull requests and issues to perform a wide range of tasks. For a detailed guide on how to set up the general-purpose Qwen Code CLI workflow, go to the [Qwen Code Assistant workflow documentation](./common-workflow).
## Configuration
@@ -222,8 +184,7 @@ To add a secret:
2. Enter the secret name and value.
3. Save.
For more information, refer to the
[official GitHub documentation on creating and using encrypted secrets][secrets].
For more information, refer to the [official GitHub documentation on creating and using encrypted secrets][secrets].
## Authentication
@@ -239,7 +200,7 @@ You can authenticate with GitHub in two ways:
authentication, we recommend creating a custom GitHub App.
For detailed setup instructions for both Qwen and GitHub authentication, go to the
[**Authentication documentation**](./docs/authentication.md).
[**Authentication documentation**](./configuration/auth).
## Extensions
@@ -247,7 +208,7 @@ The Qwen Code CLI can be extended with additional functionality through extensio
These extensions are installed from source from their GitHub repositories.
For detailed instructions on how to set up and configure extensions, go to the
[Extensions documentation](./docs/extensions.md).
[Extensions documentation](../developers/extensions/extension).
## Best Practices
@@ -258,20 +219,18 @@ Key recommendations include:
- **Securing Your Repository:** Implementing branch and tag protection, and restricting pull request approvers.
- **Monitoring and Auditing:** Regularly reviewing action logs and enabling OpenTelemetry for deeper insights into performance and behavior.
For a comprehensive guide on securing your repository and workflows, please refer to our [**Best Practices documentation**](./docs/best-practices.md).
For a comprehensive guide on securing your repository and workflows, please refer to our [**Best Practices documentation**](./common-workflow).
## Customization
Create a [QWEN.md] file in the root of your repository to provide
project-specific context and instructions to [Qwen Code CLI]. This is useful for defining
Create a QWEN.md file in the root of your repository to provide
project-specific context and instructions to [Qwen Code CLI](./common-workflow). This is useful for defining
coding conventions, architectural patterns, or other guidelines the model should
follow for a given repository.
## Contributing
Contributions are welcome! Check out the Qwen Code CLI
[**Contributing Guide**](./CONTRIBUTING.md) for more details on how to get
started.
Contributions are welcome! Check out the Qwen Code CLI **Contributing Guide** for more details on how to get started.
[secrets]: https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions
[Qwen Code]: https://github.com/QwenLM/qwen-code

View File

@@ -4,7 +4,7 @@
<br/>
<video src="https://cloud.video.taobao.com/vod/JnvYMhUia2EKFAaiuErqNpzWE9mz3odG76vArAHNg94.mp4" controls width="800">
<video src="https://cloud.video.taobao.com/vod/IKKwfM-kqNI3OJjM_U8uMCSMAoeEcJhs6VNCQmZxUfk.mp4" controls width="800">
Your browser does not support the video tag.
</video>

View File

@@ -7,7 +7,7 @@
### Features
- **Native agent experience**: Integrated AI assistant panel within Zed's interface
- **Agent Control Protocol**: Full support for ACP enabling advanced IDE interactions
- **Agent Client Protocol**: Full support for ACP enabling advanced IDE interactions
- **File management**: @-mention files to add them to the conversation context
- **Conversation history**: Access to past conversations within Zed

View File

@@ -36,13 +36,13 @@ Select **Qwen OAuth (Free)** authentication and follow the prompts to log in. Th
what does this project do?
```
![](https://gw.alicdn.com/imgextra/i2/O1CN01XoPbZm1CrsZzvMQ6m_!!6000000000135-1-tps-772-646.gif)
![](https://cloud.video.taobao.com/vod/j7-QtQScn8UEAaEdiv619fSkk5p-t17orpDbSqKVL5A.mp4)
You'll be prompted to log in on first use. That's it! [Continue with Quickstart (5 mins) →](/users/quickstart)
You'll be prompted to log in on first use. That's it! [Continue with Quickstart (5 mins) →](./quickstart)
> [!tip]
>
> See [troubleshooting](/users/support/troubleshooting) if you hit issues.
> See [troubleshooting](./support/troubleshooting) if you hit issues.
> [!note]
>
@@ -52,11 +52,11 @@ You'll be prompted to log in on first use. That's it! [Continue with Quickstart
- **Build features from descriptions**: Tell Qwen Code what you want to build in plain language. It will make a plan, write the code, and ensure it works.
- **Debug and fix issues**: Describe a bug or paste an error message. Qwen Code will analyze your codebase, identify the problem, and implement a fix.
- **Navigate any codebase**: Ask anything about your team's codebase, and get a thoughtful answer back. Qwen Code maintains awareness of your entire project structure, can find up-to-date information from the web, and with [MCP](/users/features/mcp) can pull from external datasources like Google Drive, Figma, and Slack.
- **Navigate any codebase**: Ask anything about your team's codebase, and get a thoughtful answer back. Qwen Code maintains awareness of your entire project structure, can find up-to-date information from the web, and with [MCP](./features/mcp) can pull from external datasources like Google Drive, Figma, and Slack.
- **Automate tedious tasks**: Fix fiddly lint issues, resolve merge conflicts, and write release notes. Do all this in a single command from your developer machines, or automatically in CI.
## Why developers love Qwen Code
- **Works in your terminal**: Not another chat window. Not another IDE. Qwen Code meets you where you already work, with the tools you already love.
- **Takes action**: Qwen Code can directly edit files, run commands, and create commits. Need more? [MCP](/users/features/mcp) lets Qwen Code read your design docs in Google Drive, update your tickets in Jira, or use _your_ custom developer tooling.
- **Takes action**: Qwen Code can directly edit files, run commands, and create commits. Need more? [MCP](./features/mcp) lets Qwen Code read your design docs in Google Drive, update your tickets in Jira, or use _your_ custom developer tooling.
- **Unix philosophy**: Qwen Code is composable and scriptable. `tail -f app.log | qwen -p "Slack me if you see any anomalies appear in this log stream"` _works_. Your CI can run `qwen -p "If there are new text strings, translate them into French and raise a PR for @lang-fr-team to review"`.

View File

@@ -206,7 +206,7 @@ Here are the most important commands for daily use:
| → `output [language]` | Set LLM output language | `/language output Chinese` |
| `/quit` | Exit Qwen Code immediately | `/quit` or `/exit` |
See the [CLI reference](/users/reference/cli-reference) for a complete list of commands.
See the [CLI reference](./features/commands) for a complete list of commands.
## Pro tips for beginners
@@ -225,9 +225,9 @@ See the [CLI reference](/users/reference/cli-reference) for a complete list of c
3. build a webpage that allows users to see and edit their information
```
**Let Claude explore first**
**Let Qwen Code explore first**
- Before making changes, let Claude understand your code:
- Before making changes, let Qwen Code understand your code:
```
analyze the database schema

View File

@@ -23,7 +23,7 @@ When you authenticate using your qwen.ai account, these Terms of Service and Pri
- **Terms of Service:** Your use is governed by the [Qwen Terms of Service](https://qwen.ai/termsservice).
- **Privacy Notice:** The collection and use of your data is described in the [Qwen Privacy Policy](https://qwen.ai/privacypolicy).
For details about authentication setup, quotas, and supported features, see [Authentication Setup](/users/configuration/settings).
For details about authentication setup, quotas, and supported features, see [Authentication Setup](../configuration/settings).
## 2. If you are using OpenAI-Compatible API Authentication
@@ -37,7 +37,7 @@ Qwen Code supports various OpenAI-compatible providers. Please refer to your spe
## Usage Statistics and Telemetry
Qwen Code may collect anonymous usage statistics and [telemetry](/developers/development/telemetry) data to improve the user experience and product quality. This data collection is optional and can be controlled through configuration settings.
Qwen Code may collect anonymous usage statistics and [telemetry](../../developers/development/telemetry) data to improve the user experience and product quality. This data collection is optional and can be controlled through configuration settings.
### What Data is Collected
@@ -91,4 +91,4 @@ You can switch between Qwen OAuth and OpenAI-compatible API authentication at an
2. **Within the CLI**: Use the `/auth` command to reconfigure your authentication method
3. **Environment variables**: Set up `.env` files for automatic OpenAI-compatible API authentication
For detailed instructions, see the [Authentication Setup](/users/configuration/settings#environment-variables-for-api-access) documentation.
For detailed instructions, see the [Authentication Setup](../configuration/settings#environment-variables-for-api-access) documentation.

View File

@@ -31,7 +31,7 @@ This guide provides solutions to common issues and debugging tips, including top
1. In your home directory: `~/.qwen/settings.json`.
2. In your project's root directory: `./.qwen/settings.json`.
Refer to [Qwen Code Configuration](/users/configuration/settings) for more details.
Refer to [Qwen Code Configuration](../configuration/settings) for more details.
- **Q: Why don't I see cached token counts in my stats output?**
- A: Cached token information is only displayed when cached tokens are being used. This feature is available for API key users (Qwen API key or Google Cloud Vertex AI) but not for OAuth users (such as Google Personal/Enterprise accounts like Google Gmail or Google Workspace, respectively). This is because the Qwen Code Assist API does not support cached content creation. You can still view your total token usage using the `/stats` command.
@@ -59,7 +59,7 @@ This guide provides solutions to common issues and debugging tips, including top
- **Error: "Operation not permitted", "Permission denied", or similar.**
- **Cause:** When sandboxing is enabled, Qwen Code may attempt operations that are restricted by your sandbox configuration, such as writing outside the project directory or system temp directory.
- **Solution:** Refer to the [Configuration: Sandboxing](/users/features/sandbox) documentation for more information, including how to customize your sandbox configuration.
- **Solution:** Refer to the [Configuration: Sandboxing](../features/sandbox) documentation for more information, including how to customize your sandbox configuration.
- **Qwen Code is not running in interactive mode in "CI" environments**
- **Issue:** Qwen Code does not enter interactive mode (no prompt appears) if an environment variable starting with `CI_` (e.g. `CI_TOKEN`) is set. This is because the `is-in-ci` package, used by the underlying UI framework, detects these variables and assumes a non-interactive CI environment.

View File

@@ -29,6 +29,7 @@ import {
} from '@qwen-code/qwen-code-core';
import { extensionsCommand } from '../commands/extensions.js';
import type { Settings } from './settings.js';
import { getModelProvidersConfigFromSettings } from './settings.js';
import yargs, { type Argv } from 'yargs';
import { hideBin } from 'yargs/helpers';
import * as fs from 'node:fs';
@@ -864,11 +865,16 @@ export async function loadCliConfig(
);
}
const resolvedModel =
argv.model ||
process.env['OPENAI_MODEL'] ||
process.env['QWEN_MODEL'] ||
settings.model?.name;
let resolvedModel: string | undefined;
if (argv.model) {
resolvedModel = argv.model;
} else {
resolvedModel =
process.env['OPENAI_MODEL'] ||
process.env['QWEN_MODEL'] ||
settings.model?.name;
}
const sandboxConfig = await loadSandboxConfig(settings, argv);
const screenReader =
@@ -902,6 +908,8 @@ export async function loadCliConfig(
}
}
const modelProvidersConfig = getModelProvidersConfigFromSettings(settings);
return new Config({
sessionId,
sessionData,
@@ -960,6 +968,7 @@ export async function loadCliConfig(
inputFormat,
outputFormat,
includePartialMessages,
modelProvidersConfig,
generationConfig: {
...(settings.model?.generationConfig || {}),
model: resolvedModel,

View File

@@ -14,6 +14,11 @@ import {
QWEN_DIR,
getErrorMessage,
Storage,
type AuthType,
type ProviderModelConfig as ModelConfig,
type ModelProvidersConfig,
type ModelCapabilities,
type ModelGenerationConfig,
} from '@qwen-code/qwen-code-core';
import stripJsonComments from 'strip-json-comments';
import { DefaultLight } from '../ui/themes/default-light.js';
@@ -47,7 +52,14 @@ function getMergeStrategyForPath(path: string[]): MergeStrategy | undefined {
return current?.mergeStrategy;
}
export type { Settings, MemoryImportFormat };
export type {
Settings,
MemoryImportFormat,
ModelConfig,
ModelProvidersConfig,
ModelCapabilities,
ModelGenerationConfig,
};
export const SETTINGS_DIRECTORY_NAME = '.qwen';
export const USER_SETTINGS_PATH = Storage.getGlobalSettingsPath();
@@ -862,3 +874,31 @@ export function saveSettings(settingsFile: SettingsFile): void {
throw error;
}
}
/**
* Get models configuration from settings, grouped by authType.
* Returns the models config from the merged settings without mutating files.
*
* @param settings - The merged settings object
* @returns ModelProvidersConfig object (keyed by authType) or empty object if not configured
*/
export function getModelProvidersConfigFromSettings(
settings: Settings,
): ModelProvidersConfig {
return (settings.modelProviders as ModelProvidersConfig) || {};
}
/**
* Get models for a specific authType from settings.
*
* @param settings - The merged settings object
* @param authType - The authType to get models for
* @returns Array of ModelConfig for the authType, or empty array if not configured
*/
export function getModelsForAuthType(
settings: Settings,
authType: string,
): ModelConfig[] {
const modelProvidersConfig = getModelProvidersConfigFromSettings(settings);
return modelProvidersConfig[authType as AuthType] || [];
}
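For orientation, a minimal usage sketch of the new helpers; the call site below is assumed rather than taken from this diff.

```typescript
// Sketch only: list the model IDs configured under the "openai" auth type.
// getModelsForAuthType returns an empty array when modelProviders is not configured.
import { getModelsForAuthType, type Settings } from './settings.js';

function listOpenAIModelIds(settings: Settings): string[] {
  return getModelsForAuthType(settings, 'openai').map((model) => model.id);
}
```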

View File

@@ -10,6 +10,7 @@ import type {
TelemetrySettings,
AuthType,
ChatCompressionSettings,
ModelProvidersConfig,
} from '@qwen-code/qwen-code-core';
import {
ApprovalMode,
@@ -102,6 +103,19 @@ const SETTINGS_SCHEMA = {
mergeStrategy: MergeStrategy.SHALLOW_MERGE,
},
// Model providers configuration grouped by authType
modelProviders: {
type: 'object',
label: 'Model Providers',
category: 'Model',
requiresRestart: false,
default: {} as ModelProvidersConfig,
description:
'Model providers configuration grouped by authType. Each authType contains an array of model configurations.',
showInDialog: false,
mergeStrategy: MergeStrategy.SHALLOW_MERGE,
},
general: {
type: 'object',
label: 'General',
@@ -659,6 +673,22 @@ const SETTINGS_SCHEMA = {
childKey: 'disableCacheControl',
showInDialog: true,
},
schemaCompliance: {
type: 'enum',
label: 'Tool Schema Compliance',
category: 'Generation Configuration',
requiresRestart: false,
default: 'auto',
description:
'The compliance mode for tool schemas sent to the model. Use "openapi_30" for strict OpenAPI 3.0 compatibility (e.g., for Gemini).',
parentKey: 'generationConfig',
childKey: 'schemaCompliance',
showInDialog: true,
options: [
{ value: 'auto', label: 'Auto (Default)' },
{ value: 'openapi_30', label: 'OpenAPI 3.0 Strict' },
],
},
},
},
},
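For context, a rough sketch of how a caller might branch on this setting before sending tool declarations. The `toOpenAPI30` import path and signature below are assumptions; the commit only states that the converter lives in `packages/core`.

```typescript
// Sketch only: pass schemas through in 'auto' mode, downgrade them in 'openapi_30' mode.
// The toOpenAPI30 export path and signature are assumed, not confirmed by this diff.
import { toOpenAPI30 } from '@qwen-code/qwen-code-core';

type SchemaCompliance = 'auto' | 'openapi_30';

function prepareToolSchema(
  schema: Record<string, unknown>,
  compliance: SchemaCompliance = 'auto',
): Record<string, unknown> {
  return compliance === 'openapi_30' ? toOpenAPI30(schema) : schema;
}
```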

View File

@@ -310,6 +310,7 @@ export default {
'Tool Output Truncation Lines': 'Tool Output Truncation Lines',
'Folder Trust': 'Folder Trust',
'Vision Model Preview': 'Vision Model Preview',
'Tool Schema Compliance': 'Tool Schema Compliance',
// Settings enum options
'Auto (detect from system)': 'Auto (detect from system)',
Text: 'Text',

View File

@@ -300,6 +300,7 @@ export default {
'Tool Output Truncation Lines': '工具输出截断行数',
'Folder Trust': '文件夹信任',
'Vision Model Preview': '视觉模型预览',
'Tool Schema Compliance': '工具 Schema 兼容性',
// Settings enum options
'Auto (detect from system)': '自动(从系统检测)',
Text: '文本',

View File

@@ -52,7 +52,7 @@ export const modelCommand: SlashCommand = {
};
}
const availableModels = getAvailableModelsForAuthType(authType);
const availableModels = getAvailableModelsForAuthType(authType, config);
if (availableModels.length === 0) {
return {

View File

@@ -40,7 +40,8 @@ const renderComponent = (
? ({
// --- Functions used by ModelDialog ---
getModel: vi.fn(() => MAINLINE_CODER),
setModel: vi.fn(),
setModel: vi.fn().mockResolvedValue(undefined),
switchModel: vi.fn().mockResolvedValue(undefined),
getAuthType: vi.fn(() => 'qwen-oauth'),
// --- Functions used by ClearcutLogger ---
@@ -139,16 +140,19 @@ describe('<ModelDialog />', () => {
expect(mockedSelect).toHaveBeenCalledTimes(1);
});
it('calls config.setModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', () => {
it('calls config.switchModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', async () => {
const { props, mockConfig } = renderComponent({}, {}); // Pass empty object for contextValue
const childOnSelect = mockedSelect.mock.calls[0][0].onSelect;
expect(childOnSelect).toBeDefined();
childOnSelect(MAINLINE_CODER);
await childOnSelect(MAINLINE_CODER);
// Assert against the default mock provided by renderComponent
expect(mockConfig?.setModel).toHaveBeenCalledWith(MAINLINE_CODER);
// Assert that switchModel is called with the model and metadata
expect(mockConfig?.switchModel).toHaveBeenCalledWith(MAINLINE_CODER, {
reason: 'user_manual',
context: 'Model switched via /model dialog',
});
expect(props.onClose).toHaveBeenCalledTimes(1);
});

View File

@@ -29,13 +29,11 @@ interface ModelDialogProps {
export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
const config = useContext(ConfigContext);
// Get auth type from config, default to QWEN_OAUTH if not available
const authType = config?.getAuthType() ?? AuthType.QWEN_OAUTH;
// Get available models based on auth type
const availableModels = useMemo(
() => getAvailableModelsForAuthType(authType),
[authType],
() => getAvailableModelsForAuthType(authType, config ?? undefined),
[authType, config],
);
const MODEL_OPTIONS = useMemo(
@@ -49,7 +47,6 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
[availableModels],
);
// Determine the Preferred Model (read once when the dialog opens).
const preferredModel = config?.getModel() || MAINLINE_CODER;
useKeypress(
@@ -61,17 +58,18 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
{ isActive: true },
);
// Calculate the initial index based on the preferred model.
const initialIndex = useMemo(
() => MODEL_OPTIONS.findIndex((option) => option.value === preferredModel),
[MODEL_OPTIONS, preferredModel],
);
// Handle selection internally (Autonomous Dialog).
const handleSelect = useCallback(
(model: string) => {
async (model: string) => {
if (config) {
config.setModel(model);
await config.switchModel(model, {
reason: 'user_manual',
context: 'Model switched via /model dialog',
});
const event = new ModelSlashCommandEvent(model);
logModelSlashCommand(config, event);
}

View File

@@ -0,0 +1,203 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
getAvailableModelsForAuthType,
getFilteredQwenModels,
getOpenAIAvailableModelFromEnv,
isVisionModel,
getDefaultVisionModel,
AVAILABLE_MODELS_QWEN,
MAINLINE_VLM,
MAINLINE_CODER,
} from './availableModels.js';
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
describe('availableModels', () => {
describe('AVAILABLE_MODELS_QWEN', () => {
it('should include coder model', () => {
const coderModel = AVAILABLE_MODELS_QWEN.find(
(m) => m.id === MAINLINE_CODER,
);
expect(coderModel).toBeDefined();
expect(coderModel?.isVision).toBeFalsy();
});
it('should include vision model', () => {
const visionModel = AVAILABLE_MODELS_QWEN.find(
(m) => m.id === MAINLINE_VLM,
);
expect(visionModel).toBeDefined();
expect(visionModel?.isVision).toBe(true);
});
});
describe('getFilteredQwenModels', () => {
it('should return all models when vision preview is enabled', () => {
const models = getFilteredQwenModels(true);
expect(models.length).toBe(AVAILABLE_MODELS_QWEN.length);
});
it('should filter out vision models when preview is disabled', () => {
const models = getFilteredQwenModels(false);
expect(models.every((m) => !m.isVision)).toBe(true);
});
});
describe('getOpenAIAvailableModelFromEnv', () => {
const originalEnv = process.env;
beforeEach(() => {
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should return null when OPENAI_MODEL is not set', () => {
delete process.env['OPENAI_MODEL'];
expect(getOpenAIAvailableModelFromEnv()).toBeNull();
});
it('should return model from OPENAI_MODEL env var', () => {
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
const model = getOpenAIAvailableModelFromEnv();
expect(model?.id).toBe('gpt-4-turbo');
expect(model?.label).toBe('gpt-4-turbo');
});
it('should trim whitespace from env var', () => {
process.env['OPENAI_MODEL'] = ' gpt-4 ';
const model = getOpenAIAvailableModelFromEnv();
expect(model?.id).toBe('gpt-4');
});
});
describe('getAvailableModelsForAuthType', () => {
const originalEnv = process.env;
beforeEach(() => {
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should return hard-coded qwen models for qwen-oauth', () => {
const models = getAvailableModelsForAuthType(AuthType.QWEN_OAUTH);
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
});
it('should return hard-coded qwen models even when config is provided', () => {
const mockConfig = {
getAvailableModels: vi
.fn()
.mockReturnValue([
{ id: 'custom', label: 'Custom', authType: AuthType.QWEN_OAUTH },
]),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.QWEN_OAUTH,
mockConfig,
);
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
});
it('should use config.getAvailableModels for openai authType when available', () => {
const mockModels = [
{
id: 'gpt-4',
label: 'GPT-4',
description: 'Test',
authType: AuthType.USE_OPENAI,
isVision: false,
},
];
const mockConfig = {
getAvailableModels: vi.fn().mockReturnValue(mockModels),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfig,
);
expect(mockConfig.getAvailableModels).toHaveBeenCalled();
expect(models[0].id).toBe('gpt-4');
});
it('should fallback to env var for openai when config returns empty', () => {
process.env['OPENAI_MODEL'] = 'fallback-model';
const mockConfig = {
getAvailableModels: vi.fn().mockReturnValue([]),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfig,
);
expect(models[0].id).toBe('fallback-model');
});
it('should fallback to env var for openai when config throws', () => {
process.env['OPENAI_MODEL'] = 'fallback-model';
const mockConfig = {
getAvailableModels: vi.fn().mockImplementation(() => {
throw new Error('Registry not initialized');
}),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfig,
);
expect(models[0].id).toBe('fallback-model');
});
it('should return env model for openai without config', () => {
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
expect(models[0].id).toBe('gpt-4-turbo');
});
it('should return empty array for openai without config or env', () => {
delete process.env['OPENAI_MODEL'];
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
expect(models).toEqual([]);
});
it('should return empty array for other auth types', () => {
const models = getAvailableModelsForAuthType(AuthType.USE_GEMINI);
expect(models).toEqual([]);
});
});
describe('isVisionModel', () => {
it('should return true for vision model', () => {
expect(isVisionModel(MAINLINE_VLM)).toBe(true);
});
it('should return false for non-vision model', () => {
expect(isVisionModel(MAINLINE_CODER)).toBe(false);
});
it('should return false for unknown model', () => {
expect(isVisionModel('unknown-model')).toBe(false);
});
});
describe('getDefaultVisionModel', () => {
it('should return the vision model ID', () => {
expect(getDefaultVisionModel()).toBe(MAINLINE_VLM);
});
});
});

View File

@@ -4,7 +4,12 @@
* SPDX-License-Identifier: Apache-2.0
*/
import { AuthType, DEFAULT_QWEN_MODEL } from '@qwen-code/qwen-code-core';
import {
AuthType,
DEFAULT_QWEN_MODEL,
type Config,
type AvailableModel as CoreAvailableModel,
} from '@qwen-code/qwen-code-core';
import { t } from '../../i18n/index.js';
export type AvailableModel = {
@@ -60,24 +65,56 @@ export function getOpenAIAvailableModelFromEnv(): AvailableModel | null {
return id ? { id, label: id } : null;
}
export function getAvailableModelsForAuthType(
authType: AuthType,
): AvailableModel[] {
switch (authType) {
case AuthType.QWEN_OAUTH:
return AVAILABLE_MODELS_QWEN;
case AuthType.USE_OPENAI: {
const openAIModel = getOpenAIAvailableModelFromEnv();
return openAIModel ? [openAIModel] : [];
}
default:
// For other auth types, return empty array for now
// This can be expanded later according to the design doc
return [];
}
/**
* Convert core AvailableModel to CLI AvailableModel format
*/
function convertCoreModelToCliModel(
coreModel: CoreAvailableModel,
): AvailableModel {
return {
id: coreModel.id,
label: coreModel.label,
description: coreModel.description,
isVision: coreModel.isVision ?? coreModel.capabilities?.vision ?? false,
};
}
/**
* Get available models for the given authType.
*
* If a Config object is provided, uses the model registry to get models.
* For qwen-oauth, always returns the hard-coded models.
* For the openai authType, falls back to the OPENAI_MODEL environment variable when no config is provided or the registry has no models.
*/
export function getAvailableModelsForAuthType(
authType: AuthType,
config?: Config,
): AvailableModel[] {
// For qwen-oauth, always use the hard-coded models; this aligns with the API gateway.
if (authType === AuthType.QWEN_OAUTH) {
return AVAILABLE_MODELS_QWEN;
}
if (config) {
try {
const models = config.getAvailableModels();
if (models.length > 0) {
return models.map(convertCoreModelToCliModel);
}
} catch (error) {
console.error('Failed to get models from model registry', error);
}
}
if (authType === AuthType.USE_OPENAI) {
const openAIModel = getOpenAIAvailableModelFromEnv();
return openAIModel ? [openAIModel] : [];
}
// For other auth types, return empty array
return [];
}
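A short usage sketch for the reworked helper (illustrative only; assumes an initialized Config named `config`):

// With a Config, models come from the model registry; for openai the helper
// still falls back to the OPENAI_MODEL environment variable when the registry is empty.
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI, config);
// qwen-oauth always returns the hard-coded list, with or without a Config.
const qwenModels = getAvailableModelsForAuthType(AuthType.QWEN_OAUTH);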
/**
* Hard code the default vision model as a string literal,
* until our coding model supports multimodal.

View File

@@ -76,6 +76,105 @@ describe('getGitHubRepoInfo', async () => {
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
// Tests for credential formats
it('returns the owner and repo for URL with classic PAT token (ghp_)', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx@github.com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('returns the owner and repo for URL with fine-grained PAT token (github_pat_)', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://github_pat_xxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx@github.com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('returns the owner and repo for URL with username:password format', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://username:password@github.com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('returns the owner and repo for URL with OAuth token (oauth2:token)', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://oauth2:gho_xxxxxxxxxxxx@github.com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('returns the owner and repo for URL with GitHub Actions token (x-access-token)', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://x-access-token:ghs_xxxxxxxxxxxx@github.com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
// Tests for case insensitivity
it('returns the owner and repo for URL with uppercase GITHUB.COM', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://GITHUB.COM/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('returns the owner and repo for URL with mixed case GitHub.Com', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://GitHub.Com/owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
// Tests for SSH format
it('returns the owner and repo for SSH URL', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'git@github.com:owner/repo.git',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('throws for non-GitHub SSH URL', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'git@gitlab.com:owner/repo.git',
);
expect(() => {
getGitHubRepoInfo();
}).toThrowError(/Owner & repo could not be extracted from remote URL/);
});
// Tests for edge cases
it('returns the owner and repo for URL without .git suffix', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://github.com/owner/repo',
);
expect(getGitHubRepoInfo()).toEqual({ owner: 'owner', repo: 'repo' });
});
it('throws for non-GitHub HTTPS URL', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://gitlab.com/owner/repo.git',
);
expect(() => {
getGitHubRepoInfo();
}).toThrowError(/Owner & repo could not be extracted from remote URL/);
});
it('handles repo names containing .git substring', async () => {
vi.mocked(child_process.execSync).mockReturnValueOnce(
'https://github.com/owner/my.git.repo.git',
);
expect(getGitHubRepoInfo()).toEqual({
owner: 'owner',
repo: 'my.git.repo',
});
});
});
describe('getGitRepoRoot', async () => {

View File

@@ -103,17 +103,38 @@ export function getGitHubRepoInfo(): { owner: string; repo: string } {
encoding: 'utf-8',
}).trim();
// Matches either https://github.com/owner/repo.git or git@github.com:owner/repo.git
const match = remoteUrl.match(
/(?:https?:\/\/|git@)github\.com(?::|\/)([^/]+)\/([^/]+?)(?:\.git)?$/,
);
// If the regex fails match, throw an error.
if (!match || !match[1] || !match[2]) {
// Handle SCP-style SSH URLs (git@github.com:owner/repo.git)
let urlToParse = remoteUrl;
if (remoteUrl.startsWith('git@github.com:')) {
urlToParse = remoteUrl.replace('git@github.com:', '');
} else if (remoteUrl.startsWith('git@')) {
// SSH URL for a different provider (GitLab, Bitbucket, etc.)
throw new Error(
`Owner & repo could not be extracted from remote URL: ${remoteUrl}`,
);
}
return { owner: match[1], repo: match[2] };
let parsedUrl: URL;
try {
parsedUrl = new URL(urlToParse, 'https://github.com');
} catch {
throw new Error(
`Owner & repo could not be extracted from remote URL: ${remoteUrl}`,
);
}
if (parsedUrl.host !== 'github.com') {
throw new Error(
`Owner & repo could not be extracted from remote URL: ${remoteUrl}`,
);
}
const parts = parsedUrl.pathname.split('/').filter((part) => part !== '');
if (parts.length !== 2 || !parts[0] || !parts[1]) {
throw new Error(
`Owner & repo could not be extracted from remote URL: ${remoteUrl}`,
);
}
return { owner: parts[0], repo: parts[1].replace(/\.git$/, '') };
}
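Switching from a regex to WHATWG URL parsing is what makes PAT tokens and other credentials safe to handle: anything in the userinfo portion of the authority is simply ignored, because only `host` and `pathname` are read. A minimal sketch of the behavior (the token value is a placeholder):

const u = new URL('https://ghp_exampletoken@github.com/owner/repo.git');
u.host;     // 'github.com' (credentials are not part of host)
u.pathname; // '/owner/repo.git', split into ['owner', 'repo.git'], then '.git' is stripped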

View File

@@ -102,6 +102,15 @@ import {
} from '../services/sessionService.js';
import { randomUUID } from 'node:crypto';
// Models
import {
ModelSelectionManager,
type ModelProvidersConfig,
type AvailableModel,
type ResolvedModelConfig,
SelectionSource,
} from '../models/index.js';
// Re-export types
export type { AnyToolInvocation, FileFilteringOptions, MCPOAuthConfig };
export {
@@ -351,6 +360,8 @@ export interface ConfigParameters {
sdkMode?: boolean;
sessionSubagents?: SubagentConfig[];
channel?: string;
/** Model providers configuration grouped by authType */
modelProvidersConfig?: ModelProvidersConfig;
}
function normalizeConfigOutputFormat(
@@ -490,6 +501,10 @@ export class Config {
private readonly useSmartEdit: boolean;
private readonly channel: string | undefined;
// Model selection manager (ModelRegistry is internal to it)
private modelSelectionManager?: ModelSelectionManager;
private readonly modelProvidersConfig?: ModelProvidersConfig;
constructor(params: ConfigParameters) {
this.sessionId = params.sessionId ?? randomUUID();
this.sessionData = params.sessionData;
@@ -609,6 +624,7 @@ export class Config {
this.vlmSwitchMode = params.vlmSwitchMode;
this.inputFormat = params.inputFormat ?? InputFormat.TEXT;
this.fileExclusions = new FileExclusions(this);
this.modelProvidersConfig = params.modelProvidersConfig;
this.eventEmitter = params.eventEmitter;
if (params.contextFileName) {
setGeminiMdFilename(params.contextFileName);
@@ -777,13 +793,111 @@ export class Config {
async setModel(
newModel: string,
_metadata?: { reason?: string; context?: string },
metadata?: { reason?: string; context?: string },
): Promise<void> {
if (this.contentGeneratorConfig) {
this.contentGeneratorConfig.model = newModel;
const manager = this.getModelSelectionManager();
await manager.switchModel(
newModel,
SelectionSource.PROGRAMMATIC_OVERRIDE,
metadata,
);
}
/**
* Get or lazily initialize the ModelSelectionManager.
* This is the single entry point for all model-related operations.
*/
getModelSelectionManager(): ModelSelectionManager {
if (!this.modelSelectionManager) {
const currentAuthType = this.contentGeneratorConfig?.authType;
const currentModelId = this.contentGeneratorConfig?.model;
this.modelSelectionManager = new ModelSelectionManager({
initialAuthType: currentAuthType,
initialModelId: currentModelId,
onModelChange: this.handleModelChange.bind(this),
modelProvidersConfig: this.modelProvidersConfig,
});
}
// TODO: Log switch metadata (reason, context) for telemetry if needed.
return this.modelSelectionManager;
}
/**
* Handle model change from the selection manager.
* This updates the content generator config with the new model settings.
*/
private async handleModelChange(
authType: AuthType,
model: ResolvedModelConfig,
): Promise<void> {
if (!this.contentGeneratorConfig) {
return;
}
this._generationConfig.model = model.id;
// Read API key from environment variable if envKey is specified
if (model.envKey !== undefined) {
const apiKey = process.env[model.envKey];
if (apiKey) {
this._generationConfig.apiKey = apiKey;
} else {
console.warn(
`[Config] Environment variable '${model.envKey}' is not set for model '${model.id}'. ` +
`API key will not be available.`,
);
}
}
if (model.baseUrl !== undefined) {
this._generationConfig.baseUrl = model.baseUrl;
}
if (model.generationConfig) {
this._generationConfig.samplingParams = {
temperature: model.generationConfig.temperature,
top_p: model.generationConfig.top_p,
top_k: model.generationConfig.top_k,
max_tokens: model.generationConfig.max_tokens,
presence_penalty: model.generationConfig.presence_penalty,
frequency_penalty: model.generationConfig.frequency_penalty,
repetition_penalty: model.generationConfig.repetition_penalty,
};
if (model.generationConfig.timeout !== undefined) {
this._generationConfig.timeout = model.generationConfig.timeout;
}
if (model.generationConfig.maxRetries !== undefined) {
this._generationConfig.maxRetries = model.generationConfig.maxRetries;
}
if (model.generationConfig.disableCacheControl !== undefined) {
this._generationConfig.disableCacheControl =
model.generationConfig.disableCacheControl;
}
}
await this.refreshAuth(authType);
}
/**
* Get available models for the current authType.
* This is used by the /model command and ModelDialog.
*/
getAvailableModels(): AvailableModel[] {
return this.getModelSelectionManager().getAvailableModels();
}
/**
* Switch to a different model within the current authType.
* @param modelId - The model ID to switch to
* @param metadata - Optional metadata for telemetry
*/
async switchModel(
modelId: string,
metadata?: { reason?: string; context?: string },
): Promise<void> {
const manager = this.getModelSelectionManager();
await manager.switchModel(modelId, SelectionSource.USER_MANUAL, metadata);
}
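Callers outside the dialog can go through the same entry point; a brief sketch (assumes `config` is initialized and the model exists for the current authType):

await config.switchModel('gpt-4-turbo', {
  reason: 'user_manual',
  context: 'Model switched via /model dialog',
});
// setModel() remains available and now routes through the same manager,
// using SelectionSource.PROGRAMMATIC_OVERRIDE instead of USER_MANUAL.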
isInFallbackMode(): boolean {

View File

@@ -11,7 +11,8 @@ import fs from 'node:fs';
vi.mock('node:fs');
describe('Flash Model Fallback Configuration', () => {
// Skip this test because we do not have a fallback mechanism.
describe.skip('Flash Model Fallback Configuration', () => {
let config: Config;
beforeEach(() => {

View File

@@ -76,6 +76,8 @@ export type ContentGeneratorConfig = {
};
proxy?: string | undefined;
userAgent?: string;
// Schema compliance mode for tool definitions
schemaCompliance?: 'auto' | 'openapi_30';
};
export function createContentGeneratorConfig(

View File

@@ -22,6 +22,10 @@ import { GenerateContentResponse, FinishReason } from '@google/genai';
import type OpenAI from 'openai';
import { safeJsonParse } from '../../utils/safeJsonParse.js';
import { StreamingToolCallParser } from './streamingToolCallParser.js';
import {
convertSchema,
type SchemaComplianceMode,
} from '../../utils/schemaConverter.js';
/**
* Extended usage type that supports both OpenAI standard format and alternative formats
@@ -80,11 +84,13 @@ interface ParsedParts {
*/
export class OpenAIContentConverter {
private model: string;
private schemaCompliance: SchemaComplianceMode;
private streamingToolCallParser: StreamingToolCallParser =
new StreamingToolCallParser();
constructor(model: string) {
constructor(model: string, schemaCompliance: SchemaComplianceMode = 'auto') {
this.model = model;
this.schemaCompliance = schemaCompliance;
}
/**
@@ -205,6 +211,10 @@ export class OpenAIContentConverter {
);
}
if (parameters) {
parameters = convertSchema(parameters, this.schemaCompliance);
}
openAITools.push({
type: 'function',
function: {

View File

@@ -108,7 +108,10 @@ describe('ContentGenerationPipeline', () => {
describe('constructor', () => {
it('should initialize with correct configuration', () => {
expect(mockProvider.buildClient).toHaveBeenCalled();
expect(OpenAIContentConverter).toHaveBeenCalledWith('test-model');
expect(OpenAIContentConverter).toHaveBeenCalledWith(
'test-model',
undefined,
);
});
});

View File

@@ -34,6 +34,7 @@ export class ContentGenerationPipeline {
this.client = this.config.provider.buildClient();
this.converter = new OpenAIContentConverter(
this.contentGeneratorConfig.model,
this.contentGeneratorConfig.schemaCompliance,
);
}

View File

@@ -9,6 +9,25 @@ export * from './config/config.js';
export * from './output/types.js';
export * from './output/json-formatter.js';
// Export models
export {
type ModelCapabilities,
type ModelGenerationConfig,
type ModelConfig as ProviderModelConfig,
type ModelProvidersConfig,
type ResolvedModelConfig,
type AvailableModel,
type ModelSwitchMetadata,
type CurrentModelInfo,
SelectionSource,
DEFAULT_GENERATION_CONFIG,
DEFAULT_BASE_URLS,
QWEN_OAUTH_MODELS,
ModelSelectionManager,
type ModelChangeCallback,
type ModelSelectionManagerOptions,
} from './models/index.js';
// Export Core Logic
export * from './core/client.js';
export * from './core/contentGenerator.js';

View File

@@ -0,0 +1,27 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
export {
type ModelCapabilities,
type ModelGenerationConfig,
type ModelConfig,
type ModelProvidersConfig,
type ResolvedModelConfig,
type AvailableModel,
type ModelSwitchMetadata,
type CurrentModelInfo,
SelectionSource,
DEFAULT_GENERATION_CONFIG,
DEFAULT_BASE_URLS,
} from './types.js';
export { QWEN_OAUTH_MODELS } from './modelRegistry.js';
export {
ModelSelectionManager,
type ModelChangeCallback,
type ModelSelectionManagerOptions,
} from './modelSelectionManager.js';

View File

@@ -0,0 +1,336 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, beforeEach } from 'vitest';
import { ModelRegistry, QWEN_OAUTH_MODELS } from './modelRegistry.js';
import { AuthType } from '../core/contentGenerator.js';
import type { ModelProvidersConfig } from './types.js';
describe('ModelRegistry', () => {
describe('initialization', () => {
it('should always include hard-coded qwen-oauth models', () => {
const registry = new ModelRegistry();
const qwenModels = registry.getModelsForAuthType(AuthType.QWEN_OAUTH);
expect(qwenModels.length).toBe(QWEN_OAUTH_MODELS.length);
expect(qwenModels[0].id).toBe('coder-model');
expect(qwenModels[1].id).toBe('vision-model');
});
it('should initialize with empty config', () => {
const registry = new ModelRegistry();
expect(registry.hasAuthType(AuthType.QWEN_OAUTH)).toBe(true);
expect(registry.hasAuthType(AuthType.USE_OPENAI)).toBe(false);
});
it('should initialize with custom models config', () => {
const modelProvidersConfig: ModelProvidersConfig = {
openai: [
{
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
baseUrl: 'https://api.openai.com/v1',
},
],
};
const registry = new ModelRegistry(modelProvidersConfig);
expect(registry.hasAuthType(AuthType.USE_OPENAI)).toBe(true);
const openaiModels = registry.getModelsForAuthType(AuthType.USE_OPENAI);
expect(openaiModels.length).toBe(1);
expect(openaiModels[0].id).toBe('gpt-4-turbo');
});
it('should ignore qwen-oauth models in config (hard-coded)', () => {
const modelProvidersConfig: ModelProvidersConfig = {
'qwen-oauth': [
{
id: 'custom-qwen',
name: 'Custom Qwen',
},
],
};
const registry = new ModelRegistry(modelProvidersConfig);
// Should still use hard-coded qwen-oauth models
const qwenModels = registry.getModelsForAuthType(AuthType.QWEN_OAUTH);
expect(qwenModels.length).toBe(QWEN_OAUTH_MODELS.length);
expect(qwenModels.find((m) => m.id === 'custom-qwen')).toBeUndefined();
});
});
describe('getModelsForAuthType', () => {
let registry: ModelRegistry;
beforeEach(() => {
const modelProvidersConfig: ModelProvidersConfig = {
openai: [
{
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
description: 'Most capable GPT-4',
baseUrl: 'https://api.openai.com/v1',
capabilities: { vision: true },
},
{
id: 'gpt-3.5-turbo',
name: 'GPT-3.5 Turbo',
capabilities: { vision: false },
},
],
};
registry = new ModelRegistry(modelProvidersConfig);
});
it('should return models for existing authType', () => {
const models = registry.getModelsForAuthType(AuthType.USE_OPENAI);
expect(models.length).toBe(2);
});
it('should return empty array for non-existent authType', () => {
const models = registry.getModelsForAuthType(AuthType.USE_VERTEX_AI);
expect(models.length).toBe(0);
});
it('should return AvailableModel format with correct fields', () => {
const models = registry.getModelsForAuthType(AuthType.USE_OPENAI);
const gpt4 = models.find((m) => m.id === 'gpt-4-turbo');
expect(gpt4).toBeDefined();
expect(gpt4?.label).toBe('GPT-4 Turbo');
expect(gpt4?.description).toBe('Most capable GPT-4');
expect(gpt4?.isVision).toBe(true);
expect(gpt4?.authType).toBe(AuthType.USE_OPENAI);
});
});
describe('getModel', () => {
let registry: ModelRegistry;
beforeEach(() => {
const modelProvidersConfig: ModelProvidersConfig = {
openai: [
{
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
baseUrl: 'https://api.openai.com/v1',
generationConfig: {
temperature: 0.8,
max_tokens: 4096,
},
},
],
};
registry = new ModelRegistry(modelProvidersConfig);
});
it('should return resolved model config', () => {
const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4-turbo');
expect(model).toBeDefined();
expect(model?.id).toBe('gpt-4-turbo');
expect(model?.name).toBe('GPT-4 Turbo');
expect(model?.authType).toBe(AuthType.USE_OPENAI);
expect(model?.baseUrl).toBe('https://api.openai.com/v1');
});
it('should merge generationConfig with defaults', () => {
const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4-turbo');
expect(model?.generationConfig.temperature).toBe(0.8);
expect(model?.generationConfig.max_tokens).toBe(4096);
// Default values should be applied
expect(model?.generationConfig.top_p).toBe(0.9);
expect(model?.generationConfig.timeout).toBe(60000);
});
it('should return undefined for non-existent model', () => {
const model = registry.getModel(AuthType.USE_OPENAI, 'non-existent');
expect(model).toBeUndefined();
});
it('should return undefined for non-existent authType', () => {
const model = registry.getModel(AuthType.USE_VERTEX_AI, 'some-model');
expect(model).toBeUndefined();
});
});
describe('hasModel', () => {
let registry: ModelRegistry;
beforeEach(() => {
registry = new ModelRegistry({
openai: [{ id: 'gpt-4', name: 'GPT-4' }],
});
});
it('should return true for existing model', () => {
expect(registry.hasModel(AuthType.USE_OPENAI, 'gpt-4')).toBe(true);
});
it('should return false for non-existent model', () => {
expect(registry.hasModel(AuthType.USE_OPENAI, 'non-existent')).toBe(
false,
);
});
it('should return false for non-existent authType', () => {
expect(registry.hasModel(AuthType.USE_VERTEX_AI, 'gpt-4')).toBe(false);
});
});
describe('getFirstModelForAuthType', () => {
it('should return first model for authType', () => {
const registry = new ModelRegistry({
openai: [
{ id: 'first', name: 'First' },
{ id: 'second', name: 'Second' },
],
});
const firstModel = registry.getFirstModelForAuthType(AuthType.USE_OPENAI);
expect(firstModel?.id).toBe('first');
});
it('should return undefined for empty authType', () => {
const registry = new ModelRegistry();
const firstModel = registry.getFirstModelForAuthType(AuthType.USE_OPENAI);
expect(firstModel).toBeUndefined();
});
});
describe('getDefaultModelForAuthType', () => {
it('should return coder-model for qwen-oauth', () => {
const registry = new ModelRegistry();
const defaultModel = registry.getDefaultModelForAuthType(
AuthType.QWEN_OAUTH,
);
expect(defaultModel?.id).toBe('coder-model');
});
it('should return first model for other authTypes', () => {
const registry = new ModelRegistry({
openai: [
{ id: 'gpt-4', name: 'GPT-4' },
{ id: 'gpt-3.5', name: 'GPT-3.5' },
],
});
const defaultModel = registry.getDefaultModelForAuthType(
AuthType.USE_OPENAI,
);
expect(defaultModel?.id).toBe('gpt-4');
});
});
describe('getAvailableAuthTypes', () => {
it('should return all configured authTypes', () => {
const registry = new ModelRegistry({
openai: [{ id: 'gpt-4', name: 'GPT-4' }],
});
const authTypes = registry.getAvailableAuthTypes();
expect(authTypes).toContain(AuthType.QWEN_OAUTH);
expect(authTypes).toContain(AuthType.USE_OPENAI);
});
});
describe('validation', () => {
it('should throw error for model without id', () => {
expect(
() =>
new ModelRegistry({
openai: [{ id: '', name: 'No ID' }],
}),
).toThrow('missing required field: id');
});
});
describe('default base URLs', () => {
it('should apply default dashscope URL for qwen-oauth', () => {
const registry = new ModelRegistry();
const model = registry.getModel(AuthType.QWEN_OAUTH, 'coder-model');
expect(model?.baseUrl).toBe(
'https://dashscope.aliyuncs.com/compatible-mode/v1',
);
});
it('should apply default openai URL when not specified', () => {
const registry = new ModelRegistry({
openai: [{ id: 'gpt-4', name: 'GPT-4' }],
});
const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4');
expect(model?.baseUrl).toBe('https://api.openai.com/v1');
});
it('should use custom baseUrl when specified', () => {
const registry = new ModelRegistry({
openai: [
{
id: 'deepseek',
name: 'DeepSeek',
baseUrl: 'https://api.deepseek.com/v1',
},
],
});
const model = registry.getModel(AuthType.USE_OPENAI, 'deepseek');
expect(model?.baseUrl).toBe('https://api.deepseek.com/v1');
});
});
describe('findAuthTypesForModel', () => {
it('should return empty array for non-existent model', () => {
const registry = new ModelRegistry();
const authTypes = registry.findAuthTypesForModel('non-existent');
expect(authTypes).toEqual([]);
});
it('should return authTypes that have the model', () => {
const registry = new ModelRegistry({
openai: [{ id: 'gpt-4', name: 'GPT-4' }],
});
const authTypes = registry.findAuthTypesForModel('gpt-4');
expect(authTypes).toContain(AuthType.USE_OPENAI);
expect(authTypes.length).toBe(1);
});
it('should return multiple authTypes if model exists in multiple', () => {
const registry = new ModelRegistry({
openai: [{ id: 'shared-model', name: 'Shared' }],
'gemini-api-key': [{ id: 'shared-model', name: 'Shared Gemini' }],
});
const authTypes = registry.findAuthTypesForModel('shared-model');
expect(authTypes.length).toBe(2);
expect(authTypes).toContain(AuthType.USE_OPENAI);
expect(authTypes).toContain(AuthType.USE_GEMINI);
});
it('should prioritize preferred authType in results', () => {
const registry = new ModelRegistry({
openai: [{ id: 'shared-model', name: 'Shared' }],
'gemini-api-key': [{ id: 'shared-model', name: 'Shared Gemini' }],
});
const authTypes = registry.findAuthTypesForModel(
'shared-model',
AuthType.USE_GEMINI,
);
expect(authTypes[0]).toBe(AuthType.USE_GEMINI);
});
it('should handle qwen-oauth models', () => {
const registry = new ModelRegistry();
const authTypes = registry.findAuthTypesForModel('coder-model');
expect(authTypes).toContain(AuthType.QWEN_OAUTH);
});
});
});

View File

@@ -0,0 +1,268 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { AuthType } from '../core/contentGenerator.js';
import {
type ModelConfig,
type ModelProvidersConfig,
type ResolvedModelConfig,
type AvailableModel,
type ModelGenerationConfig,
DEFAULT_GENERATION_CONFIG,
DEFAULT_BASE_URLS,
} from './types.js';
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
/**
* Hard-coded Qwen OAuth models that are always available.
* These cannot be overridden by user configuration.
*/
export const QWEN_OAUTH_MODELS: ModelConfig[] = [
{
id: 'coder-model',
name: 'Qwen Coder',
description:
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)',
capabilities: { vision: false },
generationConfig: {
temperature: 0.7,
top_p: 0.9,
max_tokens: 8192,
timeout: 60000,
maxRetries: 3,
},
},
{
id: 'vision-model',
name: 'Qwen Vision',
description:
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)',
capabilities: { vision: true },
generationConfig: {
temperature: 0.7,
top_p: 0.9,
max_tokens: 8192,
timeout: 60000,
maxRetries: 3,
},
},
];
/**
* Central registry for managing model configurations.
* Models are organized by authType.
*/
export class ModelRegistry {
private modelsByAuthType: Map<AuthType, Map<string, ResolvedModelConfig>>;
// Reverse index for O(1) model lookups: modelId -> authTypes[]
private modelIdToAuthTypes: Map<string, AuthType[]>;
constructor(modelProvidersConfig?: ModelProvidersConfig) {
this.modelsByAuthType = new Map();
this.modelIdToAuthTypes = new Map();
// Always register qwen-oauth models (hard-coded, cannot be overridden)
this.registerAuthTypeModels(AuthType.QWEN_OAUTH, QWEN_OAUTH_MODELS);
// Register user-configured models for other authTypes
if (modelProvidersConfig) {
for (const [authType, models] of Object.entries(modelProvidersConfig)) {
// Skip qwen-oauth as it uses hard-coded models
if (authType === AuthType.QWEN_OAUTH) {
continue;
}
const authTypeEnum = authType as AuthType;
this.registerAuthTypeModels(authTypeEnum, models);
}
}
}
/**
* Register models for an authType
*/
private registerAuthTypeModels(
authType: AuthType,
models: ModelConfig[],
): void {
const modelMap = new Map<string, ResolvedModelConfig>();
for (const config of models) {
const resolved = this.resolveModelConfig(config, authType);
modelMap.set(config.id, resolved);
// Update reverse index
const existingAuthTypes = this.modelIdToAuthTypes.get(config.id) || [];
existingAuthTypes.push(authType);
this.modelIdToAuthTypes.set(config.id, existingAuthTypes);
}
this.modelsByAuthType.set(authType, modelMap);
}
/**
* Get all models for a specific authType.
* This is used by /model command to show only relevant models.
*/
getModelsForAuthType(authType: AuthType): AvailableModel[] {
const models = this.modelsByAuthType.get(authType);
if (!models) return [];
return Array.from(models.values()).map((model) => ({
id: model.id,
label: model.name,
description: model.description,
capabilities: model.capabilities,
authType: model.authType,
isVision: model.capabilities?.vision ?? false,
}));
}
/**
* Get all available authTypes that have models configured
*/
getAvailableAuthTypes(): AuthType[] {
return Array.from(this.modelsByAuthType.keys());
}
/**
* Get model configuration by authType and modelId
*/
getModel(
authType: AuthType,
modelId: string,
): ResolvedModelConfig | undefined {
const models = this.modelsByAuthType.get(authType);
return models?.get(modelId);
}
/**
* Check if model exists for given authType
*/
hasModel(authType: AuthType, modelId: string): boolean {
const models = this.modelsByAuthType.get(authType);
return models?.has(modelId) ?? false;
}
/**
* Get first model for an authType (used as default)
*/
getFirstModelForAuthType(
authType: AuthType,
): ResolvedModelConfig | undefined {
const models = this.modelsByAuthType.get(authType);
if (!models || models.size === 0) return undefined;
return Array.from(models.values())[0];
}
/**
* Get default model for an authType.
* For qwen-oauth, returns the coder model.
* For others, returns the first configured model.
*/
getDefaultModelForAuthType(
authType: AuthType,
): ResolvedModelConfig | undefined {
if (authType === AuthType.QWEN_OAUTH) {
return this.getModel(authType, DEFAULT_QWEN_MODEL);
}
return this.getFirstModelForAuthType(authType);
}
/**
* Resolve model config by applying defaults
*/
private resolveModelConfig(
config: ModelConfig,
authType: AuthType,
): ResolvedModelConfig {
this.validateModelConfig(config, authType);
const defaultBaseUrl = DEFAULT_BASE_URLS[authType] || '';
return {
...config,
authType,
name: config.name || config.id,
baseUrl: config.baseUrl || defaultBaseUrl,
generationConfig: this.mergeGenerationConfig(config.generationConfig),
capabilities: config.capabilities || {},
};
}
/**
* Merge generation config with defaults
*/
private mergeGenerationConfig(
config?: ModelGenerationConfig,
): ModelGenerationConfig {
if (!config) {
return { ...DEFAULT_GENERATION_CONFIG };
}
return {
...DEFAULT_GENERATION_CONFIG,
...config,
};
}
/**
* Validate model configuration
*/
private validateModelConfig(config: ModelConfig, authType: AuthType): void {
if (!config.id) {
throw new Error(
`Model config in authType '${authType}' missing required field: id`,
);
}
}
/**
* Check if the registry has any models for a given authType
*/
hasAuthType(authType: AuthType): boolean {
const models = this.modelsByAuthType.get(authType);
return models !== undefined && models.size > 0;
}
/**
* Get total number of models across all authTypes
*/
getTotalModelCount(): number {
let count = 0;
for (const models of this.modelsByAuthType.values()) {
count += models.size;
}
return count;
}
/**
* Find all authTypes that have a model with the given modelId.
* Uses reverse index for O(1) lookup.
* Returns empty array if model doesn't exist.
*
* @param modelId - The model ID to search for
* @param preferredAuthType - Optional authType to prioritize in results
* @returns Array of authTypes that have this model (preferred authType first if found)
*/
findAuthTypesForModel(
modelId: string,
preferredAuthType?: AuthType,
): AuthType[] {
const authTypes = this.modelIdToAuthTypes.get(modelId) || [];
// If no preferred authType or it's not in the list, return as-is
if (!preferredAuthType || !authTypes.includes(preferredAuthType)) {
return authTypes;
}
// Move preferred authType to front
return [
preferredAuthType,
...authTypes.filter((at) => at !== preferredAuthType),
];
}
}
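A compact sketch of how the registry behaves once constructed (the model entry is illustrative; the resolved values follow the defaults defined in types.ts):

const registry = new ModelRegistry({
  openai: [{ id: 'gpt-4-turbo', name: 'GPT-4 Turbo' }],
});
registry.getModelsForAuthType(AuthType.USE_OPENAI); // one AvailableModel labelled 'GPT-4 Turbo'
registry.getModel(AuthType.USE_OPENAI, 'gpt-4-turbo')?.baseUrl; // 'https://api.openai.com/v1' (default applied)
registry.hasAuthType(AuthType.QWEN_OAUTH); // true: qwen-oauth models are always registered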

View File

@@ -0,0 +1,235 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { ModelSelectionManager } from './modelSelectionManager.js';
import { AuthType } from '../core/contentGenerator.js';
import { SelectionSource } from './types.js';
import type { ModelProvidersConfig } from './types.js';
describe('ModelSelectionManager', () => {
let manager: ModelSelectionManager;
const defaultConfig: ModelProvidersConfig = {
openai: [
{
id: 'gpt-4-turbo',
name: 'GPT-4 Turbo',
baseUrl: 'https://api.openai.com/v1',
},
{
id: 'gpt-3.5-turbo',
name: 'GPT-3.5 Turbo',
baseUrl: 'https://api.openai.com/v1',
},
{
id: 'deepseek-coder',
name: 'DeepSeek Coder',
baseUrl: 'https://api.deepseek.com/v1',
},
],
};
describe('initialization', () => {
it('should initialize with default qwen-oauth authType and coder-model', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
});
expect(manager.getCurrentAuthType()).toBe(AuthType.QWEN_OAUTH);
expect(manager.getCurrentModelId()).toBe('coder-model');
expect(manager.getSelectionSource()).toBe(SelectionSource.DEFAULT);
});
it('should initialize with specified authType and model', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
initialModelId: 'gpt-4-turbo',
});
expect(manager.getCurrentAuthType()).toBe(AuthType.USE_OPENAI);
expect(manager.getCurrentModelId()).toBe('gpt-4-turbo');
expect(manager.getSelectionSource()).toBe(SelectionSource.SETTINGS);
});
it('should fallback to default model if specified model not found', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
initialModelId: 'non-existent',
});
expect(manager.getCurrentAuthType()).toBe(AuthType.USE_OPENAI);
// Should fallback to first model
expect(manager.getCurrentModelId()).toBe('gpt-4-turbo');
});
});
describe('switchModel', () => {
beforeEach(() => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
initialModelId: 'gpt-4-turbo',
});
});
it('should switch model within same authType', async () => {
await manager.switchModel('gpt-3.5-turbo', SelectionSource.USER_MANUAL);
expect(manager.getCurrentModelId()).toBe('gpt-3.5-turbo');
expect(manager.getCurrentAuthType()).toBe(AuthType.USE_OPENAI);
});
it('should update selection source on switch', async () => {
await manager.switchModel('gpt-3.5-turbo', SelectionSource.USER_MANUAL);
expect(manager.getSelectionSource()).toBe(SelectionSource.USER_MANUAL);
});
it('should call onModelChange callback', async () => {
const onModelChange = vi.fn();
manager.setOnModelChange(onModelChange);
await manager.switchModel('gpt-3.5-turbo', SelectionSource.USER_MANUAL);
expect(onModelChange).toHaveBeenCalledTimes(1);
expect(onModelChange).toHaveBeenCalledWith(
AuthType.USE_OPENAI,
expect.objectContaining({ id: 'gpt-3.5-turbo' }),
);
});
it('should throw error for non-existent model', async () => {
await expect(
manager.switchModel('non-existent', SelectionSource.USER_MANUAL),
).rejects.toThrow('not found for authType');
});
it('should allow any source to override previous selection', async () => {
// First set to USER_MANUAL
await manager.switchModel('gpt-3.5-turbo', SelectionSource.USER_MANUAL);
expect(manager.getCurrentModelId()).toBe('gpt-3.5-turbo');
// Should allow PROGRAMMATIC_OVERRIDE to override USER_MANUAL
await manager.switchModel(
'gpt-4-turbo',
SelectionSource.PROGRAMMATIC_OVERRIDE,
);
expect(manager.getCurrentModelId()).toBe('gpt-4-turbo');
// Should allow SETTINGS to override PROGRAMMATIC_OVERRIDE
await manager.switchModel('gpt-3.5-turbo', SelectionSource.SETTINGS);
expect(manager.getCurrentModelId()).toBe('gpt-3.5-turbo');
});
});
describe('getAvailableModels', () => {
it('should return models for current authType', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
});
const models = manager.getAvailableModels();
expect(models.length).toBe(3);
expect(models.map((m) => m.id)).toContain('gpt-4-turbo');
});
it('should return qwen-oauth models by default', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
});
const models = manager.getAvailableModels();
expect(models.some((m) => m.id === 'coder-model')).toBe(true);
expect(models.some((m) => m.id === 'vision-model')).toBe(true);
});
});
describe('getAvailableAuthTypes', () => {
it('should return all available authTypes', () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
});
const authTypes = manager.getAvailableAuthTypes();
expect(authTypes).toContain(AuthType.QWEN_OAUTH);
expect(authTypes).toContain(AuthType.USE_OPENAI);
});
});
describe('getCurrentModel', () => {
beforeEach(() => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
initialModelId: 'gpt-4-turbo',
});
});
it('should return current model info', () => {
const modelInfo = manager.getCurrentModel();
expect(modelInfo.authType).toBe(AuthType.USE_OPENAI);
expect(modelInfo.modelId).toBe('gpt-4-turbo');
expect(modelInfo.model.id).toBe('gpt-4-turbo');
expect(modelInfo.selectionSource).toBe(SelectionSource.SETTINGS);
});
it('should throw error if no model selected', () => {
// Create manager with invalid initial state
const mgr = new ModelSelectionManager({
modelProvidersConfig: { openai: [] },
initialAuthType: AuthType.USE_OPENAI,
});
expect(() => mgr.getCurrentModel()).toThrow('No model selected');
});
});
describe('selection timestamp', () => {
it('should update timestamp on model switch', async () => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
initialAuthType: AuthType.USE_OPENAI,
initialModelId: 'gpt-4-turbo',
});
const initialTimestamp = manager.getSelectionTimestamp();
// Wait a small amount to ensure timestamp changes
await new Promise((resolve) => setTimeout(resolve, 10));
await manager.switchModel('gpt-3.5-turbo', SelectionSource.USER_MANUAL);
expect(manager.getSelectionTimestamp()).toBeGreaterThan(initialTimestamp);
});
});
describe('delegation methods', () => {
beforeEach(() => {
manager = new ModelSelectionManager({
modelProvidersConfig: defaultConfig,
});
});
it('should delegate hasModel to registry', () => {
expect(manager.hasModel(AuthType.QWEN_OAUTH, 'coder-model')).toBe(true);
expect(manager.hasModel(AuthType.QWEN_OAUTH, 'non-existent')).toBe(false);
});
it('should delegate getModel to registry', () => {
const model = manager.getModel(AuthType.QWEN_OAUTH, 'coder-model');
expect(model).toBeDefined();
expect(model?.id).toBe('coder-model');
const nonExistent = manager.getModel(AuthType.QWEN_OAUTH, 'non-existent');
expect(nonExistent).toBeUndefined();
});
});
});

View File

@@ -0,0 +1,251 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { AuthType } from '../core/contentGenerator.js';
import { ModelRegistry } from './modelRegistry.js';
import {
type ResolvedModelConfig,
type AvailableModel,
type ModelSwitchMetadata,
type CurrentModelInfo,
type ModelProvidersConfig,
SelectionSource,
} from './types.js';
/**
* Callback type for when the model changes.
* This is used to notify Config to update the ContentGenerator.
*/
export type ModelChangeCallback = (
authType: AuthType,
model: ResolvedModelConfig,
) => Promise<void>;
/**
* Options for initializing the ModelSelectionManager
*/
export interface ModelSelectionManagerOptions {
/** Initial authType from persisted settings */
initialAuthType?: AuthType;
/** Initial model ID from persisted settings */
initialModelId?: string;
/** Callback when model changes */
onModelChange?: ModelChangeCallback;
/** Model providers configuration for creating ModelRegistry */
modelProvidersConfig?: ModelProvidersConfig;
}
/**
* Manages model and auth selection with persistence.
* Two-level selection: authType → model
*/
export class ModelSelectionManager {
private modelRegistry: ModelRegistry;
// Current selection state
private currentAuthType: AuthType;
private currentModelId: string;
// Selection metadata for tracking and observability
private selectionSource: SelectionSource = SelectionSource.DEFAULT;
private selectionTimestamp: number = Date.now();
// Callback for model changes
private onModelChange?: ModelChangeCallback;
constructor(options: ModelSelectionManagerOptions = {}) {
// Create ModelRegistry internally - it's an implementation detail
this.modelRegistry = new ModelRegistry(options.modelProvidersConfig);
this.onModelChange = options.onModelChange;
// Initialize from options or use defaults
this.currentAuthType = options.initialAuthType || AuthType.QWEN_OAUTH;
this.currentModelId = options.initialModelId || '';
// Validate and initialize selection
this.initializeDefaultSelection(options);
}
/**
* Initialize default selection
*/
private initializeDefaultSelection(
_options: ModelSelectionManagerOptions,
): void {
// Check if persisted model selection is valid
if (
this.currentModelId &&
this.modelRegistry.hasModel(this.currentAuthType, this.currentModelId)
) {
this.selectionSource = SelectionSource.SETTINGS;
return;
}
// Check environment variables (backward compatibility)
const envModel = this.getModelFromEnvironment();
if (
envModel &&
this.modelRegistry.hasModel(this.currentAuthType, envModel)
) {
this.currentModelId = envModel;
this.selectionSource = SelectionSource.ENVIRONMENT;
return;
}
// Use registry default (first model for current authType)
const defaultModel = this.modelRegistry.getDefaultModelForAuthType(
this.currentAuthType,
);
if (defaultModel) {
this.currentModelId = defaultModel.id;
this.selectionSource = SelectionSource.DEFAULT;
}
}
/**
* Get model from environment variables (backward compatibility)
*/
private getModelFromEnvironment(): string | undefined {
// Support legacy OPENAI_MODEL env var for openai authType
if (this.currentAuthType === AuthType.USE_OPENAI) {
return process.env['OPENAI_MODEL'];
}
return undefined;
}
/**
* Switch model within current authType.
* This updates model name and generation config.
*/
async switchModel(
modelId: string,
source: SelectionSource,
_metadata?: ModelSwitchMetadata,
): Promise<void> {
// Validate model exists for current authType
const model = this.modelRegistry.getModel(this.currentAuthType, modelId);
if (!model) {
throw new Error(
`Model '${modelId}' not found for authType '${this.currentAuthType}'`,
);
}
// Store previous model for rollback if needed
const previousModelId = this.currentModelId;
try {
// Update selection state
this.currentModelId = modelId;
this.selectionSource = source;
this.selectionTimestamp = Date.now();
// Notify about the change
if (this.onModelChange) {
await this.onModelChange(this.currentAuthType, model);
}
} catch (error) {
// Rollback on error
this.currentModelId = previousModelId;
throw error;
}
}
/**
* Get available models for current authType.
* Used by /model command to show only relevant models.
*/
getAvailableModels(): AvailableModel[] {
return this.modelRegistry.getModelsForAuthType(this.currentAuthType);
}
/**
* Get available authTypes.
* Used by /auth command.
*/
getAvailableAuthTypes(): AuthType[] {
return this.modelRegistry.getAvailableAuthTypes();
}
/**
* Get current authType
*/
getCurrentAuthType(): AuthType {
return this.currentAuthType;
}
/**
* Get current model ID
*/
getCurrentModelId(): string {
return this.currentModelId;
}
/**
* Get current model information
*/
getCurrentModel(): CurrentModelInfo {
if (!this.currentModelId) {
throw new Error('No model selected');
}
const model = this.modelRegistry.getModel(
this.currentAuthType,
this.currentModelId,
);
if (!model) {
throw new Error(
`Current model '${this.currentModelId}' not found for authType '${this.currentAuthType}'`,
);
}
return {
authType: this.currentAuthType,
modelId: this.currentModelId,
model,
selectionSource: this.selectionSource,
};
}
/**
* Check if a model exists for the given authType.
* Delegates to ModelRegistry.
*/
hasModel(authType: AuthType, modelId: string): boolean {
return this.modelRegistry.hasModel(authType, modelId);
}
/**
* Get model configuration by authType and modelId.
* Delegates to ModelRegistry.
*/
getModel(
authType: AuthType,
modelId: string,
): ResolvedModelConfig | undefined {
return this.modelRegistry.getModel(authType, modelId);
}
/**
* Get the current selection source
*/
getSelectionSource(): SelectionSource {
return this.selectionSource;
}
/**
* Get the timestamp of when the current selection was made
*/
getSelectionTimestamp(): number {
return this.selectionTimestamp;
}
/**
* Update the onModelChange callback
*/
setOnModelChange(callback: ModelChangeCallback): void {
this.onModelChange = callback;
}
}
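Outside of Config, the manager can be driven directly, as the tests above do; a minimal sketch:

const manager = new ModelSelectionManager({
  modelProvidersConfig: { openai: [{ id: 'gpt-4-turbo', name: 'GPT-4 Turbo' }] },
  initialAuthType: AuthType.USE_OPENAI,
});
await manager.switchModel('gpt-4-turbo', SelectionSource.USER_MANUAL);
manager.getCurrentModel(); // { authType: 'openai', modelId: 'gpt-4-turbo', ... }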

View File

@@ -0,0 +1,154 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import type { AuthType } from '../core/contentGenerator.js';
/**
* Model capabilities configuration
*/
export interface ModelCapabilities {
/** Supports image/vision inputs */
vision?: boolean;
}
/**
* Generation configuration for model sampling parameters
*/
export interface ModelGenerationConfig {
/** Temperature for sampling (0.0 - 2.0) */
temperature?: number;
/** Top-p for nucleus sampling (0.0 - 1.0) */
top_p?: number;
/** Top-k for sampling */
top_k?: number;
/** Maximum output tokens */
max_tokens?: number;
/** Presence penalty (-2.0 - 2.0) */
presence_penalty?: number;
/** Frequency penalty (-2.0 - 2.0) */
frequency_penalty?: number;
/** Repetition penalty (provider-specific) */
repetition_penalty?: number;
/** Request timeout in milliseconds */
timeout?: number;
/** Maximum retry attempts */
maxRetries?: number;
/** Disable cache control for DashScope providers */
disableCacheControl?: boolean;
}
/**
* Model configuration for a single model within an authType
*/
export interface ModelConfig {
/** Unique model ID within authType (e.g., "qwen-coder", "gpt-4-turbo") */
id: string;
/** Display name (defaults to id) */
name?: string;
/** Model description */
description?: string;
/** Environment variable name to read API key from (e.g., "OPENAI_API_KEY") */
envKey?: string;
/** API endpoint override */
baseUrl?: string;
/** Model capabilities */
capabilities?: ModelCapabilities;
/** Generation configuration (sampling parameters) */
generationConfig?: ModelGenerationConfig;
}
/**
* Model providers configuration grouped by authType
*/
export type ModelProvidersConfig = {
[authType: string]: ModelConfig[];
};
/**
* Resolved model config with all defaults applied
*/
export interface ResolvedModelConfig extends ModelConfig {
/** AuthType this model belongs to (always present from map key) */
authType: AuthType;
/** Display name (always present, defaults to id) */
name: string;
/** Environment variable name to read API key from (optional, provider-specific) */
envKey?: string;
/** API base URL (always present, has default per authType) */
baseUrl: string;
/** Generation config (always present, merged with defaults) */
generationConfig: ModelGenerationConfig;
/** Capabilities (always present, defaults to {}) */
capabilities: ModelCapabilities;
}
/**
* Model info for UI display
*/
export interface AvailableModel {
id: string;
label: string;
description?: string;
capabilities?: ModelCapabilities;
authType: AuthType;
isVision?: boolean;
}
/**
* Selection source for tracking and observability.
* This tracks how a model was selected but does not enforce any priority rules.
*/
export enum SelectionSource {
/** Default selection (first model in registry) */
DEFAULT = 'default',
/** From environment variables */
ENVIRONMENT = 'environment',
/** From settings.json */
SETTINGS = 'settings',
/** Programmatic override (e.g., VLM auto-switch, control requests) */
PROGRAMMATIC_OVERRIDE = 'programmatic_override',
/** User explicitly switched via /model command */
USER_MANUAL = 'user_manual',
}
/**
* Metadata for model switch operations
*/
export interface ModelSwitchMetadata {
/** Reason for the switch */
reason?: string;
/** Additional context */
context?: string;
}
/**
* Current model information
*/
export interface CurrentModelInfo {
authType: AuthType;
modelId: string;
model: ResolvedModelConfig;
selectionSource: SelectionSource;
}
/**
* Default generation configuration values
*/
export const DEFAULT_GENERATION_CONFIG: ModelGenerationConfig = {
temperature: 0.7,
top_p: 0.9,
max_tokens: 4096,
timeout: 60000,
maxRetries: 3,
};
/**
* Default base URLs per authType
*/
export const DEFAULT_BASE_URLS: Partial<Record<AuthType, string>> = {
'qwen-oauth': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
openai: 'https://api.openai.com/v1',
};

View File

@@ -0,0 +1,118 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */
import { describe, it, expect } from 'vitest';
import { convertSchema } from './schemaConverter.js';

describe('convertSchema', () => {
  describe('mode: auto (default)', () => {
    it('should preserve type arrays', () => {
      const input = { type: ['string', 'null'] };
      expect(convertSchema(input, 'auto')).toEqual(input);
    });

    it('should preserve items array (tuples)', () => {
      const input = {
        type: 'array',
        items: [{ type: 'string' }, { type: 'number' }],
      };
      expect(convertSchema(input, 'auto')).toEqual(input);
    });

    it('should preserve mixed enums', () => {
      const input = { enum: [1, 2, '3'] };
      expect(convertSchema(input, 'auto')).toEqual(input);
    });

    it('should preserve unsupported keywords', () => {
      const input = {
        $schema: 'http://json-schema.org/draft-07/schema#',
        exclusiveMinimum: 10,
        type: 'number',
      };
      expect(convertSchema(input, 'auto')).toEqual(input);
    });
  });

  describe('mode: openapi_30 (strict)', () => {
    it('should convert type arrays to nullable', () => {
      const input = { type: ['string', 'null'] };
      const expected = { type: 'string', nullable: true };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should fallback to first type for non-nullable arrays', () => {
      const input = { type: ['string', 'number'] };
      const expected = { type: 'string' };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should convert const to enum', () => {
      const input = { const: 'foo' };
      const expected = { enum: ['foo'] };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should convert exclusiveMinimum number to boolean', () => {
      const input = { type: 'number', exclusiveMinimum: 10 };
      const expected = {
        type: 'number',
        minimum: 10,
        exclusiveMinimum: true,
      };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should convert nested objects recursively', () => {
      const input = {
        type: 'object',
        properties: {
          prop1: { type: ['integer', 'null'], exclusiveMaximum: 5 },
        },
      };
      const expected = {
        type: 'object',
        properties: {
          prop1: {
            type: 'integer',
            nullable: true,
            maximum: 5,
            exclusiveMaximum: true,
          },
        },
      };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should stringify enums', () => {
      const input = { enum: [1, 2, '3'] };
      const expected = { enum: ['1', '2', '3'] };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should remove tuple items (array of schemas)', () => {
      const input = {
        type: 'array',
        items: [{ type: 'string' }, { type: 'number' }],
      };
      const expected = { type: 'array' };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });

    it('should remove unsupported keywords', () => {
      const input = {
        $schema: 'http://json-schema.org/draft-07/schema#',
        $id: '#foo',
        type: 'string',
        default: 'bar',
        dependencies: { foo: ['bar'] },
        patternProperties: { '^foo': { type: 'string' } },
      };
      const expected = { type: 'string' };
      expect(convertSchema(input, 'openapi_30')).toEqual(expected);
    });
  });
});
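
The cases above exercise each rule in isolation; on a real schema the transformations compose. As a rough illustration (this input is not taken from the test file), `openapi_30` mode would rewrite:

// Illustrative input combining the nullable-type and exclusive-limit rules.
const combined = {
  type: ['number', 'null'],
  exclusiveMaximum: 100,
  description: 'speed in km/h',
};
// convertSchema(combined, 'openapi_30') would yield:
// { type: 'number', nullable: true, maximum: 100, exclusiveMaximum: true, description: 'speed in km/h' }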

View File

@@ -0,0 +1,135 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

/**
 * Utility for converting JSON Schemas to be compatible with different LLM providers.
 * Specifically focuses on downgrading modern JSON Schema (Draft 7/2020-12) to
 * OpenAPI 3.0 compatible Schema Objects, which is required for the Google Gemini API.
 */

export type SchemaComplianceMode = 'auto' | 'openapi_30';

/**
 * Converts a JSON Schema to be compatible with the specified compliance mode.
 */
export function convertSchema(
  schema: Record<string, unknown>,
  mode: SchemaComplianceMode = 'auto',
): Record<string, unknown> {
  if (mode === 'openapi_30') {
    return toOpenAPI30(schema);
  }
  // Default ('auto') mode passes the schema through unchanged.
  return schema;
}

/**
 * Converts modern JSON Schema to an OpenAPI 3.0 Schema Object.
 * Attempts to preserve semantics where possible through transformations.
 */
function toOpenAPI30(schema: Record<string, unknown>): Record<string, unknown> {
  const convert = (obj: unknown): unknown => {
    if (typeof obj !== 'object' || obj === null) {
      return obj;
    }
    if (Array.isArray(obj)) {
      return obj.map(convert);
    }

    const source = obj as Record<string, unknown>;
    const target: Record<string, unknown> = {};

    // 1. Type Handling
    if (Array.isArray(source['type'])) {
      const types = source['type'] as string[];
      // Handle the ["string", "null"] pattern common in modern schemas
      if (types.length === 2 && types.includes('null')) {
        target['type'] = types.find((t) => t !== 'null');
        target['nullable'] = true;
      } else {
        // Fallback for other unions: OpenAPI 3.0 does not support type arrays,
        // so take the first listed type. anyOf would preserve more semantics,
        // but the simple fallback is safer for now.
        target['type'] = types[0];
      }
    } else if (source['type'] !== undefined) {
      target['type'] = source['type'];
    }

    // 2. Const Handling (Draft 6+) -> Enum (OpenAPI 3.0)
    if (source['const'] !== undefined) {
      target['enum'] = [source['const']];
      delete target['const'];
    }

    // 3. Exclusive Limits (Draft 6+ number) -> (Draft 4 boolean)
    // exclusiveMinimum: 10 -> minimum: 10, exclusiveMinimum: true
    if (typeof source['exclusiveMinimum'] === 'number') {
      target['minimum'] = source['exclusiveMinimum'];
      target['exclusiveMinimum'] = true;
    }
    if (typeof source['exclusiveMaximum'] === 'number') {
      target['maximum'] = source['exclusiveMaximum'];
      target['exclusiveMaximum'] = true;
    }

    // 4. Array Items (Tuple -> Single Schema)
    // OpenAPI 3.0 items must be a schema object, not an array of schemas.
    if (Array.isArray(source['items'])) {
      // OpenAPI 3.0 has no tuple support. Stripping `items` lets validation
      // pass (the array accepts any item type), which matches the legacy
      // behavior but is explicit. A stricter alternative would be to merge
      // the tuple entries with `oneOf`.
      delete target['items'];
    } else if (
      typeof source['items'] === 'object' &&
      source['items'] !== null
    ) {
      target['items'] = convert(source['items']);
    }

    // 5. Enum Stringification
    // Gemini strictly requires enums to be strings
    if (Array.isArray(source['enum'])) {
      target['enum'] = source['enum'].map(String);
    }

    // 6. Recursively process other properties
    for (const [key, value] of Object.entries(source)) {
      // Skip fields we've already handled or want to remove
      if (
        key === 'type' ||
        key === 'const' ||
        key === 'exclusiveMinimum' ||
        key === 'exclusiveMaximum' ||
        key === 'items' ||
        key === 'enum' ||
        key === '$schema' ||
        key === '$id' ||
        key === 'default' || // Optional: Gemini sometimes complains about defaults conflicting with types
        key === 'dependencies' ||
        key === 'patternProperties'
      ) {
        continue;
      }
      target[key] = convert(value);
    }

    // Preserve default if it doesn't conflict (simple pass-through)
    // if (source['default'] !== undefined) {
    //   target['default'] = source['default'];
    // }

    return target;
  };

  return convert(schema) as Record<string, unknown>;
}
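
At the call site, the converter is meant to wrap a tool's schema before the declaration is sent to a strict OpenAPI 3.0 backend. A minimal sketch of that usage, assuming a placeholder tool definition and wiring; only `convertSchema` and `SchemaComplianceMode` come from the file above:

// Illustrative only: apply the converter to a tool's input schema before
// building the declaration passed to a strict OpenAPI 3.0 provider.
import { convertSchema, type SchemaComplianceMode } from './schemaConverter.js';

const mode: SchemaComplianceMode = 'openapi_30'; // e.g. resolved from generationConfig
const tool = {
  name: 'get_weather', // placeholder tool, not from the codebase
  inputSchema: {
    type: 'object',
    properties: {
      unit: { const: 'celsius' },
      days: { type: ['integer', 'null'], exclusiveMinimum: 0 },
    },
  },
};

const declaration = {
  name: tool.name,
  parameters: convertSchema(tool.inputSchema, mode),
};
// 'days' becomes { type: 'integer', nullable: true, minimum: 0, exclusiveMinimum: true }
// and 'unit' becomes { enum: ['celsius'] }.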

View File

@@ -48,5 +48,5 @@
}
.assistant-message-container.assistant-message-loading::after {
display: none
display: none;
}

View File

@@ -172,7 +172,8 @@
/* Loading animation for toolcall header */
@keyframes toolcallHeaderPulse {
0%, 100% {
0%,
100% {
opacity: 1;
}
50% {

View File

@@ -51,7 +51,8 @@
.composer-form:focus-within {
/* match existing highlight behavior */
border-color: var(--app-input-highlight);
box-shadow: 0 1px 2px color-mix(in srgb, var(--app-input-highlight), transparent 80%);
box-shadow: 0 1px 2px
color-mix(in srgb, var(--app-input-highlight), transparent 80%);
}
/* Composer: input editable area */
@@ -66,7 +67,7 @@
The data attribute is needed because some browsers insert a <br> in
contentEditable, which breaks :empty matching. */
.composer-input:empty:before,
.composer-input[data-empty="true"]::before {
.composer-input[data-empty='true']::before {
content: attr(data-placeholder);
color: var(--app-input-placeholder-foreground);
pointer-events: none;
@@ -80,7 +81,7 @@
outline: none;
}
.composer-input:disabled,
.composer-input[contenteditable="false"] {
.composer-input[contenteditable='false'] {
color: #999;
cursor: not-allowed;
}