Mirror of https://github.com/QwenLM/qwen-code.git (synced 2026-01-07 17:39:17 +00:00)

Compare commits: v0.1.0...release/v0 (71 commits)
| Author | SHA1 | Date |
|---|---|---|
| | e6d08f0596 | |
| | 160b64523e | |
| | 0752a31e1e | |
| | b029f0d2ce | |
| | d5d96c726a | |
| | 06141cda8d | |
| | 22edef0cb9 | |
| | ca1ae19715 | |
| | 6aaac12d70 | |
| | 3c01c7153b | |
| | 7a472e4fcf | |
| | 5390f662fc | |
| | c3d427730e | |
| | 21fba6eb89 | |
| | d17c37af7d | |
| | 82170e96c6 | |
| | decb04efc4 | |
| | 3bd0cb36c4 | |
| | 553a36302a | |
| | 498d7a083a | |
| | 3a69931791 | |
| | d4ab328671 | |
| | 90500ea67b | |
| | 335e765df0 | |
| | 448e30bf88 | |
| | 26215b6d0a | |
| | f6f76a17e6 | |
| | 55a3b69a8e | |
| | 22bd108775 | |
| | 7ff07fd88c | |
| | 2967bec11c | |
| | 6357a5c87e | |
| | 7e827833bf | |
| | d1507e73fe | |
| | 45f1000dea | |
| | 04f0996327 | |
| | d8cc0a1f04 | |
| | 512c91a969 | |
| | ff8a8ac693 | |
| | 908ac5e1b0 | |
| | ea4a7a2368 | |
| | 50d5cc2f6a | |
| | 5386099559 | |
| | 495a9d6d92 | |
| | db58aaff3a | |
| | 817218f1cf | |
| | 7843de882a | |
| | 40d82a2b25 | |
| | a40479d40a | |
| | 7cb068ceb2 | |
| | 864bf03fee | |
| | 9a41db612a | |
| | 4781736f99 | |
| | ced79cf4e3 | |
| | 33e22713a0 | |
| | 92245f0f00 | |
| | 4f35f7431a | |
| | 84957bbb50 | |
| | c1164bdd7e | |
| | f8be8a61c8 | |
| | c884dc080b | |
| | 32a71986d5 | |
| | 6da6bc0dfd | |
| | 7ccba75621 | |
| | e0e5fa5084 | |
| | 799d2bf0db | |
| | 65cf80f4ab | |
| | 741eaf91c2 | |
| | 79b4821499 | |
| | b1ece177b7 | |
| | f9f6eb52dd | |
````diff
@@ -309,7 +309,8 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
 ```

 - **`tavilyApiKey`** (string):
-  - **Description:** API key for Tavily web search service. Required to enable the `web_search` tool functionality. If not configured, the web search tool will be disabled and skipped.
+  - **Description:** API key for Tavily web search service. Used to enable the `web_search` tool functionality.
+  - **Note:** This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
   - **Default:** `undefined` (web search disabled)
   - **Example:** `"tavilyApiKey": "tvly-your-api-key-here"`
 - **`chatCompression`** (object):
````
```diff
@@ -465,8 +466,8 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
   - This is useful for development and testing.
 - **`TAVILY_API_KEY`**:
   - Your API key for the Tavily web search service.
-  - Required to enable the `web_search` tool functionality.
-  - If not configured, the web search tool will be disabled and skipped.
+  - Used to enable the `web_search` tool functionality.
+  - **Note:** For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers to enable web search.
   - Example: `export TAVILY_API_KEY="tvly-your-api-key-here"`

 ## Command-Line Arguments
```
```diff
@@ -540,6 +541,9 @@ Arguments passed directly when running the CLI can override other configurations
   - Displays the version of the CLI.
 - **`--openai-logging`**:
   - Enables logging of OpenAI API calls for debugging and analysis. This flag overrides the `enableOpenAILogging` setting in `settings.json`.
+- **`--openai-logging-dir <directory>`**:
+  - Sets a custom directory path for OpenAI API logs. This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion.
+  - **Example:** `qwen --openai-logging-dir "~/qwen-logs" --openai-logging`
 - **`--tavily-api-key <api_key>`**:
   - Sets the Tavily API key for web search functionality for this session.
   - Example: `qwen --tavily-api-key tvly-your-api-key-here`
```
````diff
@@ -160,9 +160,30 @@ Settings are organized into categories. All settings should be placed within the
   - **Default:** `undefined`

 - **`model.chatCompression.contextPercentageThreshold`** (number):
-  - **Description:** Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit.
+  - **Description:** Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely.
   - **Default:** `0.7`

+- **`model.generationConfig`** (object):
+  - **Description:** Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, and `disableCacheControl`, along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults.
+  - **Default:** `undefined`
+  - **Example:**
+
+    ```json
+    {
+      "model": {
+        "generationConfig": {
+          "timeout": 60000,
+          "disableCacheControl": false,
+          "samplingParams": {
+            "temperature": 0.2,
+            "top_p": 0.8,
+            "max_tokens": 1024
+          }
+        }
+      }
+    }
+    ```
+
 - **`model.skipNextSpeakerCheck`** (boolean):
   - **Description:** Skip the next speaker check.
   - **Default:** `false`
````
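The threshold described in the hunk above can be set through a minimal `settings.json` fragment; this is an illustrative sketch (the `0.6` value is an example, not the default):

```json
{
  "model": {
    "chatCompression": {
      "contextPercentageThreshold": 0.6
    }
  }
}
```

Per the updated description, setting the value to `0` disables compression entirely.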
```diff
@@ -171,6 +192,22 @@ Settings are organized into categories. All settings should be placed within the
   - **Description:** Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions.
   - **Default:** `false`

+- **`model.skipStartupContext`** (boolean):
+  - **Description:** Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup.
+  - **Default:** `false`
+
+- **`model.enableOpenAILogging`** (boolean):
+  - **Description:** Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files.
+  - **Default:** `false`
+
+- **`model.openAILoggingDir`** (string):
+  - **Description:** Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from current working directory), and `~` expansion (home directory).
+  - **Default:** `undefined`
+  - **Examples:**
+    - `"~/qwen-logs"` - Logs to `~/qwen-logs` directory
+    - `"./custom-logs"` - Logs to `./custom-logs` relative to current directory
+    - `"/tmp/openai-logs"` - Logs to absolute path `/tmp/openai-logs`
+
 #### `context`

 - **`context.fileName`** (string or array of strings):
```
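Taken together, the logging settings introduced in this hunk might look like the following in `settings.json` (an illustrative sketch; the values shown are examples, not defaults):

```json
{
  "model": {
    "skipStartupContext": false,
    "enableOpenAILogging": true,
    "openAILoggingDir": "~/qwen-logs"
  }
}
```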
```diff
@@ -246,6 +283,29 @@ Settings are organized into categories. All settings should be placed within the
   - It must return function output as JSON on `stdout`, analogous to [`functionResponse.response.content`](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#functionresponse).
   - **Default:** `undefined`

+- **`tools.useRipgrep`** (boolean):
+  - **Description:** Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance.
+  - **Default:** `true`
+
+- **`tools.useBuiltinRipgrep`** (boolean):
+  - **Description:** Use the bundled ripgrep binary. When set to `false`, the system-level `rg` command will be used instead. This setting is only effective when `tools.useRipgrep` is `true`.
+  - **Default:** `true`
+
+- **`tools.enableToolOutputTruncation`** (boolean):
+  - **Description:** Enable truncation of large tool outputs.
+  - **Default:** `true`
+  - **Requires restart:** Yes
+
+- **`tools.truncateToolOutputThreshold`** (number):
+  - **Description:** Truncate tool output if it is larger than this many characters. Applies to Shell, Grep, Glob, ReadFile and ReadManyFiles tools.
+  - **Default:** `25000`
+  - **Requires restart:** Yes
+
+- **`tools.truncateToolOutputLines`** (number):
+  - **Description:** Maximum lines or entries kept when truncating tool output. Applies to Shell, Grep, Glob, ReadFile and ReadManyFiles tools.
+  - **Default:** `1000`
+  - **Requires restart:** Yes
+
 #### `mcp`

 - **`mcp.serverCommand`** (string):
```
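The new `tools` settings above could be combined as follows (an illustrative fragment; the values mirror the documented defaults except `useBuiltinRipgrep`, shown flipped to use a system-level `rg`). Note that the truncation settings require a restart to take effect:

```json
{
  "tools": {
    "useRipgrep": true,
    "useBuiltinRipgrep": false,
    "enableToolOutputTruncation": true,
    "truncateToolOutputThreshold": 25000,
    "truncateToolOutputLines": 1000
  }
}
```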
```diff
@@ -297,7 +357,8 @@ Settings are organized into categories. All settings should be placed within the
   - **Default:** `undefined`

 - **`advanced.tavilyApiKey`** (string):
-  - **Description:** API key for Tavily web search service. Required to enable the `web_search` tool functionality. If not configured, the web search tool will be disabled and skipped.
+  - **Description:** API key for Tavily web search service. Used to enable the `web_search` tool functionality.
+  - **Note:** This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
   - **Default:** `undefined`

 #### `mcpServers`
```
```diff
@@ -378,6 +439,8 @@ Here is an example of a `settings.json` file with the nested structure, new as o
   "model": {
     "name": "qwen3-coder-plus",
     "maxSessionTurns": 10,
+    "enableOpenAILogging": false,
+    "openAILoggingDir": "~/qwen-logs",
     "summarizeToolOutput": {
       "run_shell_command": {
         "tokenBudget": 100
```
```diff
@@ -466,8 +529,8 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
   - Set to a string to customize the title of the CLI.
 - **`TAVILY_API_KEY`**:
   - Your API key for the Tavily web search service.
-  - Required to enable the `web_search` tool functionality.
-  - If not configured, the web search tool will be disabled and skipped.
+  - Used to enable the `web_search` tool functionality.
+  - **Note:** For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers to enable web search.
   - Example: `export TAVILY_API_KEY="tvly-your-api-key-here"`

 ## Command-Line Arguments
```
```diff
@@ -515,7 +578,7 @@ Arguments passed directly when running the CLI can override other configurations
   - Example: `qwen --approval-mode auto-edit`
 - **`--allowed-tools <tool1,tool2,...>`**:
   - A comma-separated list of tool names that will bypass the confirmation dialog.
-  - Example: `qwen --allowed-tools "ShellTool(git status)"`
+  - Example: `qwen --allowed-tools "Shell(git status)"`
 - **`--telemetry`**:
   - Enables [telemetry](../telemetry.md).
 - **`--telemetry-target`**:
```
```diff
@@ -548,6 +611,9 @@ Arguments passed directly when running the CLI can override other configurations
   - Displays the version of the CLI.
 - **`--openai-logging`**:
   - Enables logging of OpenAI API calls for debugging and analysis. This flag overrides the `enableOpenAILogging` setting in `settings.json`.
+- **`--openai-logging-dir <directory>`**:
+  - Sets a custom directory path for OpenAI API logs. This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion.
+  - **Example:** `qwen --openai-logging-dir "~/qwen-logs" --openai-logging`
 - **`--tavily-api-key <api_key>`**:
   - Sets the Tavily API key for web search functionality for this session.
   - Example: `qwen --tavily-api-key tvly-your-api-key-here`
```
```diff
@@ -21,7 +21,7 @@ The Qwen Code core (`packages/core`) features a robust system for defining, regi
 - **Returning Rich Content:** Tools are not limited to returning simple text. The `llmContent` can be a `PartListUnion`, which is an array that can contain a mix of `Part` objects (for images, audio, etc.) and `string`s. This allows a single tool execution to return multiple pieces of rich content.

 - **Tool Registry (`tool-registry.ts`):** A class (`ToolRegistry`) responsible for:
-  - **Registering Tools:** Holding a collection of all available built-in tools (e.g., `ReadFileTool`, `ShellTool`).
+  - **Registering Tools:** Holding a collection of all available built-in tools (e.g., `ListFiles`, `ReadFile`).
   - **Discovering Tools:** It can also discover tools dynamically:
     - **Command-based Discovery:** If `tools.toolDiscoveryCommand` is configured in settings, this command is executed. It's expected to output JSON describing custom tools, which are then registered as `DiscoveredTool` instances.
     - **MCP-based Discovery:** If `mcp.mcpServerCommand` is configured, the registry can connect to a Model Context Protocol (MCP) server to list and register tools (`DiscoveredMCPTool`).
```
```diff
@@ -33,20 +33,24 @@ The Qwen Code core (`packages/core`) features a robust system for defining, regi
 The core comes with a suite of pre-defined tools, typically found in `packages/core/src/tools/`. These include:

 - **File System Tools:**
-  - `LSTool` (`ls.ts`): Lists directory contents.
-  - `ReadFileTool` (`read-file.ts`): Reads the content of a single file. It takes an `absolute_path` parameter, which must be an absolute path.
-  - `WriteFileTool` (`write-file.ts`): Writes content to a file.
-  - `GrepTool` (`grep.ts`): Searches for patterns in files.
-  - `GlobTool` (`glob.ts`): Finds files matching glob patterns.
-  - `EditTool` (`edit.ts`): Performs in-place modifications to files (often requiring confirmation).
-  - `ReadManyFilesTool` (`read-many-files.ts`): Reads and concatenates content from multiple files or glob patterns (used by the `@` command in CLI).
+  - `ListFiles` (`ls.ts`): Lists directory contents.
+  - `ReadFile` (`read-file.ts`): Reads the content of a single file. It takes an `absolute_path` parameter, which must be an absolute path.
+  - `WriteFile` (`write-file.ts`): Writes content to a file.
+  - `ReadManyFiles` (`read-many-files.ts`): Reads and concatenates content from multiple files or glob patterns (used by the `@` command in CLI).
+  - `Grep` (`grep.ts`): Searches for patterns in files.
+  - `Glob` (`glob.ts`): Finds files matching glob patterns.
+  - `Edit` (`edit.ts`): Performs in-place modifications to files (often requiring confirmation).
 - **Execution Tools:**
-  - `ShellTool` (`shell.ts`): Executes arbitrary shell commands (requires careful sandboxing and user confirmation).
+  - `Shell` (`shell.ts`): Executes arbitrary shell commands (requires careful sandboxing and user confirmation).
 - **Web Tools:**
-  - `WebFetchTool` (`web-fetch.ts`): Fetches content from a URL.
-  - `WebSearchTool` (`web-search.ts`): Performs a web search.
+  - `WebFetch` (`web-fetch.ts`): Fetches content from a URL.
+  - `WebSearch` (`web-search.ts`): Performs a web search.
 - **Memory Tools:**
-  - `MemoryTool` (`memoryTool.ts`): Interacts with the AI's memory.
+  - `SaveMemory` (`memoryTool.ts`): Interacts with the AI's memory.
+- **Planning Tools:**
+  - `Task` (`task.ts`): Delegates tasks to specialized subagents.
+  - `TodoWrite` (`todoWrite.ts`): Creates and manages a structured task list.
+  - `ExitPlanMode` (`exitPlanMode.ts`): Exits plan mode and returns to normal operation.

 Each of these tools extends `BaseTool` and implements the required methods for its specific functionality.
```
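The registry-plus-`BaseTool` pattern described in this hunk can be sketched as follows. This is a simplified stand-in, not the actual `packages/core` source: the real `BaseTool` and `ToolRegistry` carry richer interfaces (parameter schemas, confirmation, discovery), and the `execute` signature here is assumed for illustration only.

```typescript
// Simplified sketch of the tool registry pattern described above.
// BaseTool here is a hypothetical stand-in mirroring only the idea that
// each tool declares a name and an execute method.
abstract class BaseTool {
  constructor(readonly name: string) {}
  abstract execute(params: Record<string, unknown>): Promise<string>;
}

// A toy tool standing in for the real ListFiles implementation in ls.ts.
class ListFiles extends BaseTool {
  constructor() {
    super("list_directory");
  }
  async execute(params: Record<string, unknown>): Promise<string> {
    return `listing ${params["path"]}`; // placeholder behavior
  }
}

// The registry holds built-in tools and would also accept discovered ones.
class ToolRegistry {
  private tools = new Map<string, BaseTool>();
  register(tool: BaseTool): void {
    this.tools.set(tool.name, tool);
  }
  get(name: string): BaseTool | undefined {
    return this.tools.get(name);
  }
}

const registry = new ToolRegistry();
registry.register(new ListFiles());
```

Dynamically discovered tools (command-based or MCP-based) would be registered through the same `register` call, which is why the registry only depends on the abstract `BaseTool` shape.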
```diff
@@ -107,7 +107,7 @@ The `qwen-extension.json` file contains the configuration for the extension. The
 - `mcpServers`: A map of MCP servers to configure. The key is the name of the server, and the value is the server configuration. These servers will be loaded on startup just like MCP servers configured in a [`settings.json` file](./cli/configuration.md). If both an extension and a `settings.json` file configure an MCP server with the same name, the server defined in the `settings.json` file takes precedence.
   - Note that all MCP server configuration options are supported except for `trust`.
 - `contextFileName`: The name of the file that contains the context for the extension. This will be used to load the context from the extension directory. If this property is not used but a `QWEN.md` file is present in your extension directory, then that file will be loaded.
-- `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command. Note that this differs from the MCP server `excludeTools` functionality, which can be listed in the MCP server config.
+- `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command. Note that this differs from the MCP server `excludeTools` functionality, which can be listed in the MCP server config. **Important:** Tools specified in `excludeTools` will be disabled for the entire conversation context and will affect all subsequent queries in the current session.

 When Qwen Code starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
```
```diff
@@ -106,7 +106,10 @@ Subagents are configured using Markdown files with YAML frontmatter. This format
 ---
 name: agent-name
 description: Brief description of when and how to use this agent
-tools: tool1, tool2, tool3 # Optional
+tools:
+  - tool1
+  - tool2
+  - tool3 # Optional
 ---

 System prompt content goes here.
```
```diff
@@ -167,7 +170,11 @@ Perfect for comprehensive test creation and test-driven development.
 ---
 name: testing-expert
 description: Writes comprehensive unit tests, integration tests, and handles test automation with best practices
-tools: read_file, write_file, read_many_files, run_shell_command
+tools:
+  - read_file
+  - write_file
+  - read_many_files
+  - run_shell_command
 ---

 You are a testing specialist focused on creating high-quality, maintainable tests.
```
|
|||||||
---
|
---
|
||||||
name: documentation-writer
|
name: documentation-writer
|
||||||
description: Creates comprehensive documentation, README files, API docs, and user guides
|
description: Creates comprehensive documentation, README files, API docs, and user guides
|
||||||
tools: read_file, write_file, read_many_files, web_search
|
tools:
|
||||||
|
- read_file
|
||||||
|
- write_file
|
||||||
|
- read_many_files
|
||||||
|
- web_search
|
||||||
---
|
---
|
||||||
|
|
||||||
You are a technical documentation specialist for ${project_name}.
|
You are a technical documentation specialist for ${project_name}.
|
||||||
```diff
@@ -256,7 +267,9 @@ Focused on code quality, security, and best practices.
 ---
 name: code-reviewer
 description: Reviews code for best practices, security issues, performance, and maintainability
-tools: read_file, read_many_files
+tools:
+  - read_file
+  - read_many_files
 ---

 You are an experienced code reviewer focused on quality, security, and maintainability.
```
```diff
@@ -298,7 +311,11 @@ Optimized for React development, hooks, and component patterns.
 ---
 name: react-specialist
 description: Expert in React development, hooks, component patterns, and modern React best practices
-tools: read_file, write_file, read_many_files, run_shell_command
+tools:
+  - read_file
+  - write_file
+  - read_many_files
+  - run_shell_command
 ---

 You are a React specialist with deep expertise in modern React development.
```
```diff
@@ -339,7 +356,11 @@ Specialized in Python development, frameworks, and best practices.
 ---
 name: python-expert
 description: Expert in Python development, frameworks, testing, and Python-specific best practices
-tools: read_file, write_file, read_many_files, run_shell_command
+tools:
+  - read_file
+  - write_file
+  - read_many_files
+  - run_shell_command
 ---

 You are a Python expert with deep knowledge of the Python ecosystem.
```
|||||||
@@ -4,12 +4,12 @@ Qwen Code provides a comprehensive suite of tools for interacting with the local
|
|||||||
|
|
||||||
**Note:** All file system tools operate within a `rootDirectory` (usually the current working directory where you launched the CLI) for security. Paths that you provide to these tools are generally expected to be absolute or are resolved relative to this root directory.
|
**Note:** All file system tools operate within a `rootDirectory` (usually the current working directory where you launched the CLI) for security. Paths that you provide to these tools are generally expected to be absolute or are resolved relative to this root directory.
|
||||||
|
|
||||||
## 1. `list_directory` (ReadFolder)
|
## 1. `list_directory` (ListFiles)
|
||||||
|
|
||||||
`list_directory` lists the names of files and subdirectories directly within a specified directory path. It can optionally ignore entries matching provided glob patterns.
|
`list_directory` lists the names of files and subdirectories directly within a specified directory path. It can optionally ignore entries matching provided glob patterns.
|
||||||
|
|
||||||
- **Tool name:** `list_directory`
|
- **Tool name:** `list_directory`
|
||||||
- **Display name:** ReadFolder
|
- **Display name:** ListFiles
|
||||||
- **File:** `ls.ts`
|
- **File:** `ls.ts`
|
||||||
- **Parameters:**
|
- **Parameters:**
|
||||||
- `path` (string, required): The absolute path to the directory to list.
|
- `path` (string, required): The absolute path to the directory to list.
|
||||||
```diff
@@ -59,86 +59,80 @@ Qwen Code provides a comprehensive suite of tools for interacting with the local
 - **Output (`llmContent`):** A success message, e.g., `Successfully overwrote file: /path/to/your/file.txt` or `Successfully created and wrote to new file: /path/to/new/file.txt`.
 - **Confirmation:** Yes. Shows a diff of changes and asks for user approval before writing.

-## 4. `glob` (FindFiles)
+## 4. `glob` (Glob)

 `glob` finds files matching specific glob patterns (e.g., `src/**/*.ts`, `*.md`), returning absolute paths sorted by modification time (newest first).

 - **Tool name:** `glob`
-- **Display name:** FindFiles
+- **Display name:** Glob
 - **File:** `glob.ts`
 - **Parameters:**
   - `pattern` (string, required): The glob pattern to match against (e.g., `"*.py"`, `"src/**/*.js"`).
-  - `path` (string, optional): The absolute path to the directory to search within. If omitted, searches the tool's root directory.
-  - `case_sensitive` (boolean, optional): Whether the search should be case-sensitive. Defaults to `false`.
-  - `respect_git_ignore` (boolean, optional): Whether to respect .gitignore patterns when finding files. Defaults to `true`.
+  - `path` (string, optional): The directory to search in. If not specified, the current working directory will be used.
 - **Behavior:**
   - Searches for files matching the glob pattern within the specified directory.
   - Returns a list of absolute paths, sorted with the most recently modified files first.
-  - Ignores common nuisance directories like `node_modules` and `.git` by default.
-- **Output (`llmContent`):** A message like: `Found 5 file(s) matching "*.ts" within src, sorted by modification time (newest first):\nsrc/file1.ts\nsrc/subdir/file2.ts...`
+  - Respects .gitignore and .qwenignore patterns by default.
+  - Limits results to 100 files to prevent context overflow.
+- **Output (`llmContent`):** A message like: `Found 5 file(s) matching "*.ts" within /path/to/search/dir, sorted by modification time (newest first):\n---\n/path/to/file1.ts\n/path/to/subdir/file2.ts\n---\n[95 files truncated] ...`
 - **Confirmation:** No.

-## 5. `search_file_content` (SearchText)
+## 5. `grep_search` (Grep)

-`search_file_content` searches for a regular expression pattern within the content of files in a specified directory. Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.
+`grep_search` searches for a regular expression pattern within the content of files in a specified directory. Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.

-- **Tool name:** `search_file_content`
-- **Display name:** SearchText
-- **File:** `grep.ts`
+- **Tool name:** `grep_search`
+- **Display name:** Grep
+- **File:** `ripGrep.ts` (with `grep.ts` as fallback)
 - **Parameters:**
   - `pattern` (string, required): The regular expression (regex) to search for (e.g., `"function\s+myFunction"`).
```
|
- `pattern` (string, required): The regular expression pattern to search for in file contents (e.g., `"function\\s+myFunction"`, `"log.*Error"`).
|
||||||
- `path` (string, optional): The absolute path to the directory to search within. Defaults to the current working directory.
|
- `path` (string, optional): File or directory to search in. Defaults to current working directory.
|
||||||
- `include` (string, optional): A glob pattern to filter which files are searched (e.g., `"*.js"`, `"src/**/*.{ts,tsx}"`). If omitted, searches most files (respecting common ignores).
|
- `glob` (string, optional): Glob pattern to filter files (e.g. `"*.js"`, `"src/**/*.{ts,tsx}"`).
|
||||||
- `maxResults` (number, optional): Maximum number of matches to return to prevent context overflow (default: 20, max: 100). Use lower values for broad searches, higher for specific searches.
|
- `limit` (number, optional): Limit output to first N matching lines. Optional - shows all matches if not specified.
|
||||||
- **Behavior:**
|
- **Behavior:**
|
||||||
- Uses `git grep` if available in a Git repository for speed; otherwise, falls back to system `grep` or a JavaScript-based search.
|
- Uses ripgrep for fast search when available; otherwise falls back to a JavaScript-based search implementation.
|
||||||
- Returns a list of matching lines, each prefixed with its file path (relative to the search directory) and line number.
|
- Returns matching lines with file paths and line numbers.
|
||||||
- Limits results to a maximum of 20 matches by default to prevent context overflow. When results are truncated, shows a clear warning with guidance on refining searches.
|
- Case-insensitive by default.
|
||||||
|
- Respects .gitignore and .qwenignore patterns.
|
||||||
|
- Limits output to prevent context overflow.
|
||||||
- **Output (`llmContent`):** A formatted string of matches, e.g.:
|
- **Output (`llmContent`):** A formatted string of matches, e.g.:
|
||||||
|
|
||||||
```
|
```
|
||||||
Found 3 matches for pattern "myFunction" in path "." (filter: "*.ts"):
|
Found 3 matches for pattern "myFunction" in path "." (filter: "*.ts"):
|
||||||
---
|
---
|
||||||
File: src/utils.ts
|
src/utils.ts:15:export function myFunction() {
|
||||||
L15: export function myFunction() {
|
src/utils.ts:22: myFunction.call();
|
||||||
L22: myFunction.call();
|
src/index.ts:5:import { myFunction } from './utils';
|
||||||
---
|
|
||||||
File: src/index.ts
|
|
||||||
L5: import { myFunction } from './utils';
|
|
||||||
---
|
---
|
||||||
|
|
||||||
WARNING: Results truncated to prevent context overflow. To see more results:
|
[0 lines truncated] ...
|
||||||
- Use a more specific pattern to reduce matches
|
|
||||||
- Add file filters with the 'include' parameter (e.g., "*.js", "src/**")
|
|
||||||
- Specify a narrower 'path' to search in a subdirectory
|
|
||||||
- Increase 'maxResults' parameter if you need more matches (current: 20)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
- **Confirmation:** No.
|
- **Confirmation:** No.
|
||||||
|
|
||||||
### `search_file_content` examples
|
### `grep_search` examples
|
||||||
|
|
||||||
Search for a pattern with default result limiting:
|
Search for a pattern with default result limiting:
|
||||||
|
|
||||||
```
|
```
|
||||||
search_file_content(pattern="function\s+myFunction", path="src")
|
grep_search(pattern="function\\s+myFunction", path="src")
|
||||||
```
|
```
|
||||||
|
|
||||||
Search for a pattern with custom result limiting:
|
Search for a pattern with custom result limiting:
|
||||||
|
|
||||||
```
|
```
|
||||||
search_file_content(pattern="function", path="src", maxResults=50)
|
grep_search(pattern="function", path="src", limit=50)
|
||||||
```
|
```
|
||||||
|
|
||||||
Search for a pattern with file filtering and custom result limiting:
|
Search for a pattern with file filtering and custom result limiting:
|
||||||
|
|
||||||
```
|
```
|
||||||
search_file_content(pattern="function", include="*.js", maxResults=10)
|
grep_search(pattern="function", glob="*.js", limit=10)
|
||||||
```
|
```
|
||||||
|
|
||||||
## 6. `edit` (Edit)
|
## 6. `edit` (Edit)
|
||||||
|
|
||||||
`edit` replaces text within a file. By default, replaces a single occurrence, but can replace multiple occurrences when `expected_replacements` is specified. This tool is designed for precise, targeted changes and requires significant context around the `old_string` to ensure it modifies the correct location.
|
`edit` replaces text within a file. By default it requires `old_string` to match a single unique location; set `replace_all` to `true` when you intentionally want to change every occurrence. This tool is designed for precise, targeted changes and requires significant context around the `old_string` to ensure it modifies the correct location.
|
||||||
|
|
||||||
- **Tool name:** `edit`
|
- **Tool name:** `edit`
|
||||||
- **Display name:** Edit
|
- **Display name:** Edit
|
||||||
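The sort-and-truncate behavior the updated `glob` docs describe (newest first, capped at 100 results) can be sketched in a few lines. This is a hypothetical helper for illustration only, not the tool's actual implementation; the real tool obtains modification times via the filesystem.

```typescript
// Sketch of glob's documented post-processing: sort matches by modification
// time (newest first) and cap the list at 100 entries. `mtimeMs` values are
// supplied by the caller here; the real tool would read them via fs.stat.
interface Match {
  path: string;
  mtimeMs: number;
}

function sortAndTruncate(matches: Match[], limit = 100): string[] {
  return [...matches]
    .sort((a, b) => b.mtimeMs - a.mtimeMs) // most recently modified first
    .slice(0, limit)
    .map((m) => m.path);
}

const result = sortAndTruncate([
  { path: '/repo/src/old.ts', mtimeMs: 1_000 },
  { path: '/repo/src/new.ts', mtimeMs: 3_000 },
  { path: '/repo/src/mid.ts', mtimeMs: 2_000 },
]);
console.log(result); // newest first
```

When more than `limit` files match, the extra entries are dropped and the tool reports how many were truncated, as in the `[95 files truncated]` output example above.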
@@ -150,12 +144,12 @@ search_file_content(pattern="function", include="*.js", maxResults=10)
     **CRITICAL:** This string must uniquely identify the single instance to change. It should include at least 3 lines of context _before_ and _after_ the target text, matching whitespace and indentation precisely. If `old_string` is empty, the tool attempts to create a new file at `file_path` with `new_string` as content.

   - `new_string` (string, required): The exact literal text to replace `old_string` with.
-  - `expected_replacements` (number, optional): The number of occurrences to replace. Defaults to `1`.
+  - `replace_all` (boolean, optional): Replace all occurrences of `old_string`. Defaults to `false`.

 - **Behavior:**
   - If `old_string` is empty and `file_path` does not exist, creates a new file with `new_string` as content.
-  - If `old_string` is provided, it reads the `file_path` and attempts to find exactly one occurrence of `old_string`.
-  - If one occurrence is found, it replaces it with `new_string`.
+  - If `old_string` is provided, it reads the `file_path` and attempts to find exactly one occurrence unless `replace_all` is true.
+  - If the match is unique (or `replace_all` is true), it replaces the text with `new_string`.
   - **Enhanced Reliability (Multi-Stage Edit Correction):** To significantly improve the success rate of edits, especially when the model-provided `old_string` might not be perfectly precise, the tool incorporates a multi-stage edit correction mechanism.
     - If the initial `old_string` isn't found or matches multiple locations, the tool can leverage the Qwen model to iteratively refine `old_string` (and potentially `new_string`).
     - This self-correction process attempts to identify the unique segment the model intended to modify, making the `edit` operation more robust even with slightly imperfect initial context.
@@ -164,10 +158,10 @@ search_file_content(pattern="function", include="*.js", maxResults=10)
   - `old_string` is not empty, but the `file_path` does not exist.
   - `old_string` is empty, but the `file_path` already exists.
   - `old_string` is not found in the file after attempts to correct it.
-  - `old_string` is found multiple times, and the self-correction mechanism cannot resolve it to a single, unambiguous match.
+  - `old_string` is found multiple times, `replace_all` is false, and the self-correction mechanism cannot resolve it to a single, unambiguous match.
 - **Output (`llmContent`):**
   - On success: `Successfully modified file: /path/to/file.txt (1 replacements).` or `Created new file: /path/to/new_file.txt with provided content.`
-  - On failure: An error message explaining the reason (e.g., `Failed to edit, 0 occurrences found...`, `Failed to edit, expected 1 occurrences but found 2...`).
+  - On failure: An error message explaining the reason (e.g., `Failed to edit, 0 occurrences found...`, `Failed to edit because the text matches multiple locations...`).
 - **Confirmation:** Yes. Shows a diff of the proposed changes and asks for user approval before writing to the file.

 These file system tools provide a foundation for Qwen Code to understand and interact with your local project context.
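The uniqueness rule for `edit` described above can be illustrated with a small string-replacement sketch. The helper name is hypothetical and this ignores the tool's multi-stage self-correction; it only demonstrates the match-counting semantics.

```typescript
// Sketch of the documented `edit` matching rule: exactly one match is
// required unless replaceAll is set; zero matches and ambiguous matches
// both fail with messages mirroring the docs above.
function applyEdit(
  content: string,
  oldString: string,
  newString: string,
  replaceAll = false,
): string {
  // Count non-overlapping occurrences of oldString.
  const count = content.split(oldString).length - 1;
  if (count === 0) {
    throw new Error('Failed to edit, 0 occurrences found');
  }
  if (count > 1 && !replaceAll) {
    throw new Error('Failed to edit because the text matches multiple locations');
  }
  return replaceAll
    ? content.split(oldString).join(newString)
    : content.replace(oldString, newString);
}
```

For example, `applyEdit('a b a', 'a', 'x')` throws because the match is ambiguous, while `applyEdit('a b a', 'a', 'x', true)` returns `'x b x'`.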
@@ -1,43 +1,186 @@
 # Web Search Tool (`web_search`)

-This document describes the `web_search` tool.
+This document describes the `web_search` tool for performing web searches using multiple providers.

 ## Description

-Use `web_search` to perform a web search using the Tavily API. The tool returns a concise answer with sources when possible.
+Use `web_search` to perform a web search and get information from the internet. The tool supports multiple search providers and returns a concise answer with source citations when available.
+
+### Supported Providers
+
+1. **DashScope** (Official, Free) - Automatically available for Qwen OAuth users (200 requests/minute, 2000 requests/day)
+2. **Tavily** - High-quality search API with built-in answer generation
+3. **Google Custom Search** - Google's Custom Search JSON API

 ### Arguments

-`web_search` takes one argument:
+`web_search` takes two arguments:

-- `query` (string, required): The search query.
+- `query` (string, required): The search query
+- `provider` (string, optional): Specific provider to use ("dashscope", "tavily", "google")
+  - If not specified, uses the default provider from configuration

-## How to use `web_search`
+## Configuration

-`web_search` calls the Tavily API directly. You must configure the `TAVILY_API_KEY` through one of the following methods:
+### Method 1: Settings File (Recommended)

-1. **Settings file**: Add `"tavilyApiKey": "your-key-here"` to your `settings.json`
-2. **Environment variable**: Set `TAVILY_API_KEY` in your environment or `.env` file
-3. **Command line**: Use `--tavily-api-key your-key-here` when running the CLI
-
-If the key is not configured, the tool will be disabled and skipped.
-
-Usage:
-
-```
-web_search(query="Your query goes here.")
-```
+Add to your `settings.json`:
+
+```json
+{
+  "webSearch": {
+    "provider": [
+      { "type": "dashscope" },
+      { "type": "tavily", "apiKey": "tvly-xxxxx" },
+      {
+        "type": "google",
+        "apiKey": "your-google-api-key",
+        "searchEngineId": "your-search-engine-id"
+      }
+    ],
+    "default": "dashscope"
+  }
+}
+```

-## `web_search` examples
-
-Get information on a topic:
+**Notes:**

-```
-web_search(query="latest advancements in AI-powered code generation")
-```
+- DashScope doesn't require an API key (official, free service)
+- **Qwen OAuth users:** DashScope is automatically added to your provider list, even if not explicitly configured
+- Configure additional providers (Tavily, Google) if you want to use them alongside DashScope
+- Set `default` to specify which provider to use by default (if not set, priority order: Tavily > Google > DashScope)

-## Important notes
+### Method 2: Environment Variables

-- **Response returned:** The `web_search` tool returns a concise answer when available, with a list of source links.
-- **Citations:** Source links are appended as a numbered list.
-- **API key:** Configure `TAVILY_API_KEY` via settings.json, environment variables, .env files, or command line arguments. If not configured, the tool is not registered.
+Set environment variables in your shell or `.env` file:
+
+```bash
+# Tavily
+export TAVILY_API_KEY="tvly-xxxxx"
+
+# Google
+export GOOGLE_API_KEY="your-api-key"
+export GOOGLE_SEARCH_ENGINE_ID="your-engine-id"
+```
+
+### Method 3: Command Line Arguments
+
+Pass API keys when running Qwen Code:
+
+```bash
+# Tavily
+qwen --tavily-api-key tvly-xxxxx
+
+# Google
+qwen --google-api-key your-key --google-search-engine-id your-id
+
+# Specify default provider
+qwen --web-search-default tavily
+```
+
+### Backward Compatibility (Deprecated)
+
+⚠️ **DEPRECATED:** The legacy `tavilyApiKey` configuration is still supported for backward compatibility but is deprecated:
+
+```json
+{
+  "advanced": {
+    "tavilyApiKey": "tvly-xxxxx" // ⚠️ Deprecated
+  }
+}
+```
+
+**Important:** This configuration is deprecated and will be removed in a future version. Please migrate to the new `webSearch` configuration format shown above. The old configuration will automatically configure Tavily as a provider, but we strongly recommend updating your configuration.
+
+## Disabling Web Search
+
+If you want to disable the web search functionality, you can exclude the `web_search` tool in your `settings.json`:
+
+```json
+{
+  "tools": {
+    "exclude": ["web_search"]
+  }
+}
+```
+
+**Note:** This setting requires a restart of Qwen Code to take effect. Once disabled, the `web_search` tool will not be available to the model, even if web search providers are configured.
+
+## Usage Examples
+
+### Basic search (using default provider)
+
+```
+web_search(query="latest advancements in AI")
+```
+
+### Search with specific provider
+
+```
+web_search(query="latest advancements in AI", provider="tavily")
+```
+
+### Real-world examples
+
+```
+web_search(query="weather in San Francisco today")
+web_search(query="latest Node.js LTS version", provider="google")
+web_search(query="best practices for React 19", provider="dashscope")
+```
+
+## Provider Details
+
+### DashScope (Official)
+
+- **Cost:** Free
+- **Authentication:** Automatically available when using Qwen OAuth authentication
+- **Configuration:** No API key required, automatically added to provider list for Qwen OAuth users
+- **Quota:** 200 requests/minute, 2000 requests/day
+- **Best for:** General queries, always available as fallback for Qwen OAuth users
+- **Auto-registration:** If you're using Qwen OAuth, DashScope is automatically added to your provider list even if you don't configure it explicitly
+
+### Tavily
+
+- **Cost:** Requires API key (paid service with free tier)
+- **Sign up:** https://tavily.com
+- **Features:** High-quality results with AI-generated answers
+- **Best for:** Research, comprehensive answers with citations
+
+### Google Custom Search
+
+- **Cost:** Free tier available (100 queries/day)
+- **Setup:**
+  1. Enable Custom Search API in Google Cloud Console
+  2. Create a Custom Search Engine at https://programmablesearchengine.google.com
+- **Features:** Google's search quality
+- **Best for:** Specific, factual queries
+
+## Important Notes
+
+- **Response format:** Returns a concise answer with numbered source citations
+- **Citations:** Source links are appended as a numbered list: [1], [2], etc.
+- **Multiple providers:** If one provider fails, manually specify another using the `provider` parameter
+- **DashScope availability:** Automatically available for Qwen OAuth users, no configuration needed
+- **Default provider selection:** The system automatically selects a default provider based on availability:
+  1. Your explicit `default` configuration (highest priority)
+  2. CLI argument `--web-search-default`
+  3. First available provider by priority: Tavily > Google > DashScope
+
+## Troubleshooting
+
+**Tool not available?**
+
+- **For Qwen OAuth users:** The tool is automatically registered with DashScope provider, no configuration needed
+- **For other authentication types:** Ensure at least one provider (Tavily or Google) is configured
+- For Tavily/Google: Verify your API keys are correct
+
+**Provider-specific errors?**
+
+- Use the `provider` parameter to try a different search provider
+- Check your API quotas and rate limits
+- Verify API keys are properly set in configuration
+
+**Need help?**
+
+- Check your configuration: Run `qwen` and use the settings dialog
+- View your current settings in `~/.qwen-code/settings.json` (macOS/Linux) or `%USERPROFILE%\.qwen-code\settings.json` (Windows)
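The default-provider fallback described in the new document (explicit `default` first, then Tavily > Google > DashScope among configured providers) can be sketched as follows. The function name is hypothetical; this only models the documented priority rule, not the CLI's actual resolution code.

```typescript
// Sketch of the documented default-provider selection: an explicit default
// wins when it is actually configured; otherwise pick the first configured
// provider in priority order Tavily > Google > DashScope.
type ProviderType = 'tavily' | 'google' | 'dashscope';

function pickDefaultProvider(
  configured: ProviderType[],
  explicitDefault?: ProviderType,
): ProviderType | undefined {
  if (explicitDefault && configured.includes(explicitDefault)) {
    return explicitDefault;
  }
  const priority: ProviderType[] = ['tavily', 'google', 'dashscope'];
  return priority.find((p) => configured.includes(p));
}
```

So a Qwen OAuth user who also adds a Tavily key ends up with Tavily as the implicit default, while setting `"default": "dashscope"` keeps the free provider in front.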
@@ -7,7 +7,7 @@
 import path from 'node:path';
 import { fileURLToPath } from 'node:url';
 import { createRequire } from 'node:module';
-import { writeFileSync } from 'node:fs';
+import { writeFileSync, rmSync } from 'node:fs';

 let esbuild;
 try {
@@ -22,6 +22,9 @@ const __dirname = path.dirname(__filename);
 const require = createRequire(import.meta.url);
 const pkg = require(path.resolve(__dirname, 'package.json'));

+// Clean dist directory (cross-platform)
+rmSync(path.resolve(__dirname, 'dist'), { recursive: true, force: true });
+
 const external = [
   '@lydell/node-pty',
   'node-pty',
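The `rm -rf dist` the bundle script used to run is shell-specific and fails on Windows; `rmSync` with the `recursive` and `force` options does the same job on any platform, which is why the build script moved it into `esbuild.config.js`. A minimal standalone sketch (using a throwaway temp directory rather than a real `dist`):

```typescript
// Cross-platform directory cleanup with node:fs rmSync, mirroring the
// esbuild.config.js change above. A temp directory stands in for dist/
// so the snippet is self-contained.
import { mkdtempSync, rmSync, existsSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const dir = mkdtempSync(join(tmpdir(), 'dist-'));
writeFileSync(join(dir, 'bundle.js'), '// build output');

// force: true makes this a no-op when the path is already gone,
// matching the tolerant behavior of `rm -rf`.
rmSync(dir, { recursive: true, force: true });
console.log(existsSync(dir)); // false
```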
@@ -36,10 +36,10 @@ describe('JSON output', () => {
   });

   it('should return a JSON error for enforced auth mismatch before running', async () => {
-    process.env['GOOGLE_GENAI_USE_GCA'] = 'true';
+    process.env['OPENAI_API_KEY'] = 'test-key';
     await rig.setup('json-output-auth-mismatch', {
       settings: {
-        security: { auth: { enforcedType: 'gemini-api-key' } },
+        security: { auth: { enforcedType: 'qwen-oauth' } },
       },
     });

@@ -50,7 +50,7 @@ describe('JSON output', () => {
     } catch (e) {
       thrown = e as Error;
     } finally {
-      delete process.env['GOOGLE_GENAI_USE_GCA'];
+      delete process.env['OPENAI_API_KEY'];
     }

     expect(thrown).toBeDefined();
@@ -80,10 +80,8 @@ describe('JSON output', () => {
     expect(payload.error.type).toBe('Error');
     expect(payload.error.code).toBe(1);
     expect(payload.error.message).toContain(
-      'configured auth type is gemini-api-key',
-    );
-    expect(payload.error.message).toContain(
-      'current auth type is oauth-personal',
+      'configured auth type is qwen-oauth',
     );
+    expect(payload.error.message).toContain('current auth type is openai');
   });
 });
@@ -9,7 +9,6 @@ import { mkdirSync, writeFileSync, readFileSync } from 'node:fs';
 import { join, dirname } from 'node:path';
 import { fileURLToPath } from 'node:url';
 import { env } from 'node:process';
-import { DEFAULT_QWEN_MODEL } from '../packages/core/src/config/models.js';
 import fs from 'node:fs';
 import { EOL } from 'node:os';
 import * as pty from '@lydell/node-pty';
@@ -182,7 +181,6 @@ export class TestRig {
         otlpEndpoint: '',
         outfile: telemetryPath,
       },
-      model: DEFAULT_QWEN_MODEL,
       sandbox: env.GEMINI_SANDBOX !== 'false' ? env.GEMINI_SANDBOX : false,
       ...options.settings, // Allow tests to override/add settings
     };
@@ -9,14 +9,53 @@ import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

 describe('web_search', () => {
   it('should be able to search the web', async () => {
-    // Skip if Tavily key is not configured
-    if (!process.env['TAVILY_API_KEY']) {
-      console.warn('Skipping web search test: TAVILY_API_KEY not set');
+    // Check if any web search provider is available
+    const hasTavilyKey = !!process.env['TAVILY_API_KEY'];
+    const hasGoogleKey =
+      !!process.env['GOOGLE_API_KEY'] &&
+      !!process.env['GOOGLE_SEARCH_ENGINE_ID'];
+
+    // Skip if no provider is configured
+    // Note: DashScope provider is automatically available for Qwen OAuth users,
+    // but we can't easily detect that in tests without actual OAuth credentials
+    if (!hasTavilyKey && !hasGoogleKey) {
+      console.warn(
+        'Skipping web search test: No web search provider configured. ' +
+          'Set TAVILY_API_KEY or GOOGLE_API_KEY+GOOGLE_SEARCH_ENGINE_ID environment variables.',
+      );
       return;
     }

     const rig = new TestRig();
-    await rig.setup('should be able to search the web');
+    // Configure web search in settings if provider keys are available
+    const webSearchSettings: Record<string, unknown> = {};
+    const providers: Array<{
+      type: string;
+      apiKey?: string;
+      searchEngineId?: string;
+    }> = [];
+
+    if (hasTavilyKey) {
+      providers.push({ type: 'tavily', apiKey: process.env['TAVILY_API_KEY'] });
+    }
+    if (hasGoogleKey) {
+      providers.push({
+        type: 'google',
+        apiKey: process.env['GOOGLE_API_KEY'],
+        searchEngineId: process.env['GOOGLE_SEARCH_ENGINE_ID'],
+      });
+    }
+
+    if (providers.length > 0) {
+      webSearchSettings.webSearch = {
+        provider: providers,
+        default: providers[0]?.type,
+      };
+    }
+
+    await rig.setup('should be able to search the web', {
+      settings: webSearchSettings,
+    });

     let result;
     try {
package-lock.json (generated, 12 lines changed)
@@ -1,12 +1,12 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.1.0",
+  "version": "0.2.1",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@qwen-code/qwen-code",
-      "version": "0.1.0",
+      "version": "0.2.1",
       "workspaces": [
         "packages/*"
       ],
@@ -16024,7 +16024,7 @@
     },
     "packages/cli": {
       "name": "@qwen-code/qwen-code",
-      "version": "0.1.0",
+      "version": "0.2.1",
       "dependencies": {
         "@google/genai": "1.16.0",
         "@iarna/toml": "^2.2.5",
@@ -16139,7 +16139,7 @@
     },
     "packages/core": {
       "name": "@qwen-code/qwen-code-core",
-      "version": "0.1.0",
+      "version": "0.2.1",
       "hasInstallScript": true,
       "dependencies": {
         "@google/genai": "1.16.0",
@@ -16278,7 +16278,7 @@
     },
     "packages/test-utils": {
       "name": "@qwen-code/qwen-code-test-utils",
-      "version": "0.1.0",
+      "version": "0.2.1",
       "dev": true,
       "license": "Apache-2.0",
       "devDependencies": {
@@ -16290,7 +16290,7 @@
     },
     "packages/vscode-ide-companion": {
       "name": "qwen-code-vscode-ide-companion",
-      "version": "0.1.0",
+      "version": "0.2.1",
       "license": "LICENSE",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.15.1",
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.1.0",
+  "version": "0.2.1",
   "engines": {
     "node": ">=20.0.0"
   },
@@ -13,7 +13,7 @@
     "url": "git+https://github.com/QwenLM/qwen-code.git"
   },
   "config": {
-    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.0"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.2.1"
   },
   "scripts": {
     "start": "cross-env node scripts/start.js",
@@ -28,7 +28,7 @@
     "build:all": "npm run build && npm run build:sandbox && npm run build:vscode",
     "build:packages": "npm run build --workspaces",
     "build:sandbox": "node scripts/build_sandbox.js",
-    "bundle": "rm -rf dist && npm run generate && node esbuild.config.js && node scripts/copy_bundle_assets.js",
+    "bundle": "npm run generate && node esbuild.config.js && node scripts/copy_bundle_assets.js",
     "test": "npm run test --workspaces --if-present --parallel",
     "test:ci": "npm run test:ci --workspaces --if-present --parallel && npm run test:scripts",
     "test:scripts": "vitest run --config ./scripts/tests/vitest.config.ts",
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.1.0",
+  "version": "0.2.1",
   "description": "Qwen Code",
   "repository": {
     "type": "git",
@@ -25,7 +25,7 @@
     "dist"
   ],
   "config": {
-    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.0"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.2.1"
   },
   "dependencies": {
     "@google/genai": "1.16.0",

@@ -18,60 +18,26 @@ vi.mock('./settings.js', () => ({
 describe('validateAuthMethod', () => {
   beforeEach(() => {
     vi.resetModules();
-    vi.stubEnv('GEMINI_API_KEY', undefined);
-    vi.stubEnv('GOOGLE_CLOUD_PROJECT', undefined);
-    vi.stubEnv('GOOGLE_CLOUD_LOCATION', undefined);
-    vi.stubEnv('GOOGLE_API_KEY', undefined);
   });

   afterEach(() => {
     vi.unstubAllEnvs();
   });

-  it('should return null for LOGIN_WITH_GOOGLE', () => {
-    expect(validateAuthMethod(AuthType.LOGIN_WITH_GOOGLE)).toBeNull();
+  it('should return null for USE_OPENAI', () => {
+    process.env['OPENAI_API_KEY'] = 'fake-key';
+    expect(validateAuthMethod(AuthType.USE_OPENAI)).toBeNull();
   });

-  it('should return null for CLOUD_SHELL', () => {
-    expect(validateAuthMethod(AuthType.CLOUD_SHELL)).toBeNull();
+  it('should return an error message for USE_OPENAI if OPENAI_API_KEY is not set', () => {
+    delete process.env['OPENAI_API_KEY'];
+    expect(validateAuthMethod(AuthType.USE_OPENAI)).toBe(
+      'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.',
+    );
   });

-  describe('USE_GEMINI', () => {
-    it('should return null if GEMINI_API_KEY is set', () => {
-      vi.stubEnv('GEMINI_API_KEY', 'test-key');
-      expect(validateAuthMethod(AuthType.USE_GEMINI)).toBeNull();
-    });
-
-    it('should return an error message if GEMINI_API_KEY is not set', () => {
-      vi.stubEnv('GEMINI_API_KEY', undefined);
-      expect(validateAuthMethod(AuthType.USE_GEMINI)).toBe(
-        'GEMINI_API_KEY environment variable not found. Add that to your environment and try again (no reload needed if using .env)!',
-      );
-    });
-  });
-
-  describe('USE_VERTEX_AI', () => {
-    it('should return null if GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION are set', () => {
-      vi.stubEnv('GOOGLE_CLOUD_PROJECT', 'test-project');
-      vi.stubEnv('GOOGLE_CLOUD_LOCATION', 'test-location');
-      expect(validateAuthMethod(AuthType.USE_VERTEX_AI)).toBeNull();
-    });
-
-    it('should return null if GOOGLE_API_KEY is set', () => {
-      vi.stubEnv('GOOGLE_API_KEY', 'test-api-key');
-      expect(validateAuthMethod(AuthType.USE_VERTEX_AI)).toBeNull();
-    });
-
-    it('should return an error message if no required environment variables are set', () => {
-      vi.stubEnv('GOOGLE_CLOUD_PROJECT', undefined);
-      vi.stubEnv('GOOGLE_CLOUD_LOCATION', undefined);
-      expect(validateAuthMethod(AuthType.USE_VERTEX_AI)).toBe(
-        'When using Vertex AI, you must specify either:\n' +
-          '• GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables.\n' +
-          '• GOOGLE_API_KEY environment variable (if using express mode).\n' +
-          'Update your environment and try again (no reload needed if using .env)!',
-      );
-    });
+  it('should return null for QWEN_OAUTH', () => {
+    expect(validateAuthMethod(AuthType.QWEN_OAUTH)).toBeNull();
   });

   it('should return an error message for an invalid auth method', () => {

@@ -8,39 +8,13 @@ import { AuthType } from '@qwen-code/qwen-code-core';
 import { loadEnvironment, loadSettings } from './settings.js';

 export function validateAuthMethod(authMethod: string): string | null {
-  loadEnvironment(loadSettings().merged);
-  if (
-    authMethod === AuthType.LOGIN_WITH_GOOGLE ||
-    authMethod === AuthType.CLOUD_SHELL
-  ) {
-    return null;
-  }
-
-  if (authMethod === AuthType.USE_GEMINI) {
-    if (!process.env['GEMINI_API_KEY']) {
-      return 'GEMINI_API_KEY environment variable not found. Add that to your environment and try again (no reload needed if using .env)!';
-    }
-    return null;
-  }
-
-  if (authMethod === AuthType.USE_VERTEX_AI) {
-    const hasVertexProjectLocationConfig =
-      !!process.env['GOOGLE_CLOUD_PROJECT'] &&
-      !!process.env['GOOGLE_CLOUD_LOCATION'];
-    const hasGoogleApiKey = !!process.env['GOOGLE_API_KEY'];
-    if (!hasVertexProjectLocationConfig && !hasGoogleApiKey) {
-      return (
-        'When using Vertex AI, you must specify either:\n' +
-        '• GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables.\n' +
-        '• GOOGLE_API_KEY environment variable (if using express mode).\n' +
-        'Update your environment and try again (no reload needed if using .env)!'
-      );
-    }
-    return null;
-  }
-
+  const settings = loadSettings();
+  loadEnvironment(settings.merged);
   if (authMethod === AuthType.USE_OPENAI) {
-    if (!process.env['OPENAI_API_KEY']) {
+    const hasApiKey =
+      process.env['OPENAI_API_KEY'] || settings.merged.security?.auth?.apiKey;
+    if (!hasApiKey) {
       return 'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.';
     }
     return null;
@@ -54,15 +28,3 @@ export function validateAuthMethod(authMethod: string): string | null {

   return 'Invalid auth method selected.';
 }
-
-export const setOpenAIApiKey = (apiKey: string): void => {
-  process.env['OPENAI_API_KEY'] = apiKey;
-};
-
-export const setOpenAIBaseUrl = (baseUrl: string): void => {
-  process.env['OPENAI_BASE_URL'] = baseUrl;
-};
-
-export const setOpenAIModel = (model: string): void => {
-  process.env['OPENAI_MODEL'] = model;
-};

@@ -2399,6 +2399,73 @@ describe('loadCliConfig useRipgrep', () => {
   });
 });

+describe('loadCliConfig useBuiltinRipgrep', () => {
+  const originalArgv = process.argv;
+
+  beforeEach(() => {
+    vi.resetAllMocks();
+    vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
+    vi.stubEnv('GEMINI_API_KEY', 'test-api-key');
+  });
+
+  afterEach(() => {
+    process.argv = originalArgv;
+    vi.unstubAllEnvs();
+    vi.restoreAllMocks();
+  });
+
+  it('should be true by default when useBuiltinRipgrep is not set in settings', async () => {
+    process.argv = ['node', 'script.js'];
+    const argv = await parseArguments({} as Settings);
+    const settings: Settings = {};
+    const config = await loadCliConfig(
+      settings,
+      [],
+      new ExtensionEnablementManager(
+        ExtensionStorage.getUserExtensionsDir(),
+        argv.extensions,
+      ),
+      'test-session',
+      argv,
+    );
+    expect(config.getUseBuiltinRipgrep()).toBe(true);
+  });
+
+  it('should be false when useBuiltinRipgrep is set to false in settings', async () => {
+    process.argv = ['node', 'script.js'];
+    const argv = await parseArguments({} as Settings);
+    const settings: Settings = { tools: { useBuiltinRipgrep: false } };
+    const config = await loadCliConfig(
+      settings,
+      [],
+      new ExtensionEnablementManager(
+        ExtensionStorage.getUserExtensionsDir(),
+        argv.extensions,
+      ),
+      'test-session',
+      argv,
+    );
+    expect(config.getUseBuiltinRipgrep()).toBe(false);
+  });
+
+  it('should be true when useBuiltinRipgrep is explicitly set to true in settings', async () => {
+    process.argv = ['node', 'script.js'];
+    const argv = await parseArguments({} as Settings);
+    const settings: Settings = { tools: { useBuiltinRipgrep: true } };
+    const config = await loadCliConfig(
+      settings,
+      [],
+      new ExtensionEnablementManager(
+        ExtensionStorage.getUserExtensionsDir(),
+        argv.extensions,
+      ),
+      'test-session',
+      argv,
+    );
+    expect(config.getUseBuiltinRipgrep()).toBe(true);
+  });
+});
+
 describe('screenReader configuration', () => {
   const originalArgv = process.argv;

@@ -13,7 +13,6 @@ import { extensionsCommand } from '../commands/extensions.js';
 import {
   ApprovalMode,
   Config,
-  DEFAULT_QWEN_MODEL,
   DEFAULT_QWEN_EMBEDDING_MODEL,
   DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
   EditTool,
@@ -43,6 +42,7 @@ import { mcpCommand } from '../commands/mcp.js';
 import { isWorkspaceTrusted } from './trustedFolders.js';
 import type { ExtensionEnablementManager } from './extensions/extensionEnablement.js';
+import { buildWebSearchConfig } from './webSearch.js';

 // Simple console logger for now - replace with actual logger if available
 const logger = {
@@ -114,9 +114,13 @@ export interface CliArgs {
   openaiLogging: boolean | undefined;
   openaiApiKey: string | undefined;
   openaiBaseUrl: string | undefined;
+  openaiLoggingDir: string | undefined;
   proxy: string | undefined;
   includeDirectories: string[] | undefined;
   tavilyApiKey: string | undefined;
+  googleApiKey: string | undefined;
+  googleSearchEngineId: string | undefined;
+  webSearchDefault: string | undefined;
   screenReader: boolean | undefined;
   vlmSwitchMode: string | undefined;
   useSmartEdit: boolean | undefined;
@@ -194,14 +198,13 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
     })
     .option('proxy', {
       type: 'string',
-      description:
-        'Proxy for gemini client, like schema://user:password@host:port',
+      description: 'Proxy for Qwen Code, like schema://user:password@host:port',
     })
     .deprecateOption(
       'proxy',
       'Use the "proxy" setting in settings.json instead. This flag will be removed in a future version.',
     )
-    .command('$0 [query..]', 'Launch Gemini CLI', (yargsInstance: Argv) =>
+    .command('$0 [query..]', 'Launch Qwen Code CLI', (yargsInstance: Argv) =>
       yargsInstance
         .positional('query', {
           description:
@@ -315,6 +318,11 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
           description:
             'Enable logging of OpenAI API calls for debugging and analysis',
         })
+        .option('openai-logging-dir', {
+          type: 'string',
+          description:
+            'Custom directory path for OpenAI API logs. Overrides settings files.',
+        })
         .option('openai-api-key', {
           type: 'string',
           description: 'OpenAI API key to use for authentication',
@@ -325,7 +333,20 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
         })
         .option('tavily-api-key', {
           type: 'string',
-          description: 'Tavily API key for web search functionality',
+          description: 'Tavily API key for web search',
+        })
+        .option('google-api-key', {
+          type: 'string',
+          description: 'Google Custom Search API key',
+        })
+        .option('google-search-engine-id', {
+          type: 'string',
+          description: 'Google Custom Search Engine ID',
+        })
+        .option('web-search-default', {
+          type: 'string',
+          description:
+            'Default web search provider (dashscope, tavily, google)',
         })
         .option('screen-reader', {
           type: 'boolean',
@@ -669,13 +690,11 @@ export async function loadCliConfig(
     );
   }

-  const defaultModel = DEFAULT_QWEN_MODEL;
-  const resolvedModel: string =
+  const resolvedModel =
     argv.model ||
     process.env['OPENAI_MODEL'] ||
     process.env['QWEN_MODEL'] ||
-    settings.model?.name ||
-    defaultModel;
+    settings.model?.name;

   const sandboxConfig = await loadSandboxConfig(settings, argv);
   const screenReader =
@@ -739,18 +758,27 @@
     generationConfig: {
       ...(settings.model?.generationConfig || {}),
       model: resolvedModel,
-      apiKey: argv.openaiApiKey || process.env['OPENAI_API_KEY'],
-      baseUrl: argv.openaiBaseUrl || process.env['OPENAI_BASE_URL'],
+      apiKey:
+        argv.openaiApiKey ||
+        process.env['OPENAI_API_KEY'] ||
+        settings.security?.auth?.apiKey,
+      baseUrl:
+        argv.openaiBaseUrl ||
+        process.env['OPENAI_BASE_URL'] ||
+        settings.security?.auth?.baseUrl,
       enableOpenAILogging:
         (typeof argv.openaiLogging === 'undefined'
           ? settings.model?.enableOpenAILogging
           : argv.openaiLogging) ?? false,
+      openAILoggingDir:
+        argv.openaiLoggingDir || settings.model?.openAILoggingDir,
     },
     cliVersion: await getCliVersion(),
-    tavilyApiKey:
-      argv.tavilyApiKey ||
-      settings.advanced?.tavilyApiKey ||
-      process.env['TAVILY_API_KEY'],
+    webSearch: buildWebSearchConfig(
+      argv,
+      settings,
+      settings.security?.auth?.selectedType,
+    ),
     summarizeToolOutput: settings.model?.summarizeToolOutput,
     ideMode,
     chatCompression: settings.model?.chatCompression,
@@ -758,10 +786,12 @@
     interactive,
     trustedFolder,
     useRipgrep: settings.tools?.useRipgrep,
+    useBuiltinRipgrep: settings.tools?.useBuiltinRipgrep,
     shouldUseNodePtyShell: settings.tools?.shell?.enableInteractiveShell,
     skipNextSpeakerCheck: settings.model?.skipNextSpeakerCheck,
     enablePromptCompletion: settings.general?.enablePromptCompletion ?? false,
     skipLoopDetection: settings.model?.skipLoopDetection ?? false,
+    skipStartupContext: settings.model?.skipStartupContext ?? false,
     vlmSwitchMode,
     truncateToolOutputThreshold: settings.tools?.truncateToolOutputThreshold,
     truncateToolOutputLines: settings.tools?.truncateToolOutputLines,

|
|||||||
loadEnvironment,
|
loadEnvironment,
|
||||||
migrateDeprecatedSettings,
|
migrateDeprecatedSettings,
|
||||||
SettingScope,
|
SettingScope,
|
||||||
|
SETTINGS_VERSION,
|
||||||
|
SETTINGS_VERSION_KEY,
|
||||||
} from './settings.js';
|
} from './settings.js';
|
||||||
import { FatalConfigError, QWEN_DIR } from '@qwen-code/qwen-code-core';
|
import { FatalConfigError, QWEN_DIR } from '@qwen-code/qwen-code-core';
|
||||||
|
|
||||||
@@ -94,6 +96,7 @@ vi.mock('fs', async (importOriginal) => {
|
|||||||
existsSync: vi.fn(),
|
existsSync: vi.fn(),
|
||||||
readFileSync: vi.fn(),
|
readFileSync: vi.fn(),
|
||||||
writeFileSync: vi.fn(),
|
writeFileSync: vi.fn(),
|
||||||
|
renameSync: vi.fn(),
|
||||||
mkdirSync: vi.fn(),
|
mkdirSync: vi.fn(),
|
||||||
realpathSync: (p: string) => p,
|
realpathSync: (p: string) => p,
|
||||||
};
|
};
|
||||||
@@ -171,11 +174,15 @@ describe('Settings Loading and Merging', () => {
|
|||||||
getSystemSettingsPath(),
|
getSystemSettingsPath(),
|
||||||
'utf-8',
|
'utf-8',
|
||||||
);
|
);
|
||||||
expect(settings.system.settings).toEqual(systemSettingsContent);
|
expect(settings.system.settings).toEqual({
|
||||||
|
...systemSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.user.settings).toEqual({});
|
expect(settings.user.settings).toEqual({});
|
||||||
expect(settings.workspace.settings).toEqual({});
|
expect(settings.workspace.settings).toEqual({});
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
...systemSettingsContent,
|
...systemSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -207,10 +214,14 @@ describe('Settings Loading and Merging', () => {
|
|||||||
expectedUserSettingsPath,
|
expectedUserSettingsPath,
|
||||||
'utf-8',
|
'utf-8',
|
||||||
);
|
);
|
||||||
expect(settings.user.settings).toEqual(userSettingsContent);
|
expect(settings.user.settings).toEqual({
|
||||||
|
...userSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.workspace.settings).toEqual({});
|
expect(settings.workspace.settings).toEqual({});
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
...userSettingsContent,
|
...userSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -241,9 +252,13 @@ describe('Settings Loading and Merging', () => {
|
|||||||
'utf-8',
|
'utf-8',
|
||||||
);
|
);
|
||||||
expect(settings.user.settings).toEqual({});
|
expect(settings.user.settings).toEqual({});
|
||||||
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
|
expect(settings.workspace.settings).toEqual({
|
||||||
|
...workspaceSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
...workspaceSettingsContent,
|
...workspaceSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -304,10 +319,20 @@ describe('Settings Loading and Merging', () => {
|
|||||||
|
|
||||||
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
expect(settings.system.settings).toEqual(systemSettingsContent);
|
expect(settings.system.settings).toEqual({
|
||||||
expect(settings.user.settings).toEqual(userSettingsContent);
|
...systemSettingsContent,
|
||||||
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
|
expect(settings.user.settings).toEqual({
|
||||||
|
...userSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
|
expect(settings.workspace.settings).toEqual({
|
||||||
|
...workspaceSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
ui: {
|
ui: {
|
||||||
theme: 'system-theme',
|
theme: 'system-theme',
|
||||||
},
|
},
|
||||||
@@ -361,6 +386,7 @@ describe('Settings Loading and Merging', () => {
|
|||||||
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
ui: {
|
ui: {
|
||||||
theme: 'legacy-dark',
|
theme: 'legacy-dark',
|
||||||
},
|
},
|
||||||
@@ -413,6 +439,132 @@ describe('Settings Loading and Merging', () => {
|
|||||||
expect((settings.merged as TestSettings)['allowedTools']).toBeUndefined();
|
expect((settings.merged as TestSettings)['allowedTools']).toBeUndefined();
|
||||||
});
|
});
|
||||||
|
|
||||||
|
it('should add version field to migrated settings file', () => {
|
||||||
|
(mockFsExistsSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
|
||||||
|
);
|
||||||
|
const legacySettingsContent = {
|
||||||
|
theme: 'dark',
|
||||||
|
model: 'qwen-coder',
|
||||||
|
};
|
||||||
|
(fs.readFileSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathOrFileDescriptor) => {
|
||||||
|
if (p === USER_SETTINGS_PATH)
|
||||||
|
return JSON.stringify(legacySettingsContent);
|
||||||
|
return '{}';
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
|
// Verify that fs.writeFileSync was called with migrated settings including version
|
||||||
|
expect(fs.writeFileSync).toHaveBeenCalled();
|
||||||
|
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
|
||||||
|
const writtenContent = JSON.parse(writeCall[1] as string);
|
||||||
|
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should not re-migrate settings that have version field', () => {
|
||||||
|
(mockFsExistsSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
|
||||||
|
);
|
||||||
|
const migratedSettingsContent = {
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
ui: {
|
||||||
|
theme: 'dark',
|
||||||
|
},
|
||||||
|
model: {
|
||||||
|
name: 'qwen-coder',
|
||||||
|
},
|
||||||
|
};
|
||||||
|
(fs.readFileSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathOrFileDescriptor) => {
|
||||||
|
if (p === USER_SETTINGS_PATH)
|
||||||
|
return JSON.stringify(migratedSettingsContent);
|
||||||
|
return '{}';
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
|
// Verify that fs.renameSync and fs.writeFileSync were NOT called
|
||||||
|
// (because no migration was needed)
|
||||||
|
expect(fs.renameSync).not.toHaveBeenCalled();
|
||||||
|
expect(fs.writeFileSync).not.toHaveBeenCalled();
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should add version field to V2 settings without version and write to disk', () => {
|
||||||
|
(mockFsExistsSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
|
||||||
|
);
|
||||||
|
// V2 format but no version field
|
||||||
|
const v2SettingsWithoutVersion = {
|
||||||
|
ui: {
|
||||||
|
theme: 'dark',
|
||||||
|
},
|
||||||
|
model: {
|
||||||
|
name: 'qwen-coder',
|
||||||
|
},
|
||||||
|
};
|
||||||
|
(fs.readFileSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathOrFileDescriptor) => {
|
||||||
|
if (p === USER_SETTINGS_PATH)
|
||||||
|
return JSON.stringify(v2SettingsWithoutVersion);
|
||||||
|
return '{}';
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
|
// Verify that fs.writeFileSync was called (to add version)
|
||||||
|
// but NOT fs.renameSync (no backup needed, just adding version)
|
||||||
|
expect(fs.renameSync).not.toHaveBeenCalled();
|
||||||
|
expect(fs.writeFileSync).toHaveBeenCalledTimes(1);
|
||||||
|
|
||||||
|
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
|
||||||
|
const writtenPath = writeCall[0];
|
||||||
|
const writtenContent = JSON.parse(writeCall[1] as string);
|
||||||
|
|
||||||
|
expect(writtenPath).toBe(USER_SETTINGS_PATH);
|
||||||
|
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
|
||||||
|
expect(writtenContent.ui?.theme).toBe('dark');
|
||||||
|
expect(writtenContent.model?.name).toBe('qwen-coder');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should correctly handle partially migrated settings without version field', () => {
|
||||||
|
(mockFsExistsSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
|
||||||
|
);
|
||||||
|
// Edge case: model already in V2 format (object), but autoAccept in V1 format
|
||||||
|
const partiallyMigratedContent = {
|
||||||
|
model: {
|
||||||
|
name: 'qwen-coder',
|
||||||
|
},
|
||||||
|
autoAccept: false, // V1 key
|
||||||
|
};
|
||||||
|
(fs.readFileSync as Mock).mockImplementation(
|
||||||
|
(p: fs.PathOrFileDescriptor) => {
|
||||||
|
if (p === USER_SETTINGS_PATH)
|
||||||
|
return JSON.stringify(partiallyMigratedContent);
|
||||||
|
return '{}';
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
|
// Verify that the migrated settings preserve the model object correctly
|
||||||
|
expect(fs.writeFileSync).toHaveBeenCalled();
|
||||||
|
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
|
||||||
|
const writtenContent = JSON.parse(writeCall[1] as string);
|
||||||
|
|
||||||
|
// Model should remain as an object, not double-nested
|
||||||
|
expect(writtenContent.model).toEqual({ name: 'qwen-coder' });
|
||||||
|
// autoAccept should be migrated to tools.autoAccept
|
||||||
|
expect(writtenContent.tools?.autoAccept).toBe(false);
|
||||||
|
// Version field should be added
|
||||||
|
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
|
||||||
|
});
|
||||||
|
|
||||||
it('should correctly merge and migrate legacy array properties from multiple scopes', () => {
|
it('should correctly merge and migrate legacy array properties from multiple scopes', () => {
|
||||||
(mockFsExistsSync as Mock).mockReturnValue(true);
|
(mockFsExistsSync as Mock).mockReturnValue(true);
|
||||||
const legacyUserSettings = {
|
const legacyUserSettings = {
|
||||||
@@ -515,11 +667,24 @@ describe('Settings Loading and Merging', () => {
|
|||||||
|
|
||||||
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
expect(settings.systemDefaults.settings).toEqual(systemDefaultsContent);
|
expect(settings.systemDefaults.settings).toEqual({
|
||||||
expect(settings.system.settings).toEqual(systemSettingsContent);
|
...systemDefaultsContent,
|
||||||
expect(settings.user.settings).toEqual(userSettingsContent);
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
|
});
|
||||||
|
expect(settings.system.settings).toEqual({
|
||||||
|
...systemSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
|
expect(settings.user.settings).toEqual({
|
||||||
|
...userSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
|
expect(settings.workspace.settings).toEqual({
|
||||||
|
...workspaceSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.merged).toEqual({
|
expect(settings.merged).toEqual({
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
context: {
|
context: {
|
||||||
fileName: 'WORKSPACE_CONTEXT.md',
|
fileName: 'WORKSPACE_CONTEXT.md',
|
||||||
includeDirectories: [
|
includeDirectories: [
|
||||||
@@ -866,8 +1031,14 @@ describe('Settings Loading and Merging', () => {
|
|||||||
|
|
||||||
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
const settings = loadSettings(MOCK_WORKSPACE_DIR);
|
||||||
|
|
||||||
expect(settings.user.settings).toEqual(userSettingsContent);
|
expect(settings.user.settings).toEqual({
|
||||||
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
|
...userSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
|
expect(settings.workspace.settings).toEqual({
|
||||||
|
...workspaceSettingsContent,
|
||||||
|
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
|
||||||
|
});
|
||||||
expect(settings.merged.mcpServers).toEqual({
|
expect(settings.merged.mcpServers).toEqual({
|
||||||
'user-server': {
|
'user-server': {
|
||||||
command: 'user-command',
|
command: 'user-command',
|
||||||
@@ -1696,9 +1867,13 @@ describe('Settings Loading and Merging', () => {
         'utf-8',
       );
       expect(settings.system.path).toBe(MOCK_ENV_SYSTEM_SETTINGS_PATH);
-      expect(settings.system.settings).toEqual(systemSettingsContent);
+      expect(settings.system.settings).toEqual({
+        ...systemSettingsContent,
+        [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
+      });
       expect(settings.merged).toEqual({
         ...systemSettingsContent,
+        [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
       });
     });
   });
@@ -2248,6 +2423,44 @@ describe('Settings Loading and Merging', () => {
         customWittyPhrases: ['test phrase'],
       });
     });
+
+    it('should remove version field when migrating to V1', () => {
+      const v2Settings = {
+        [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
+        ui: {
+          theme: 'dark',
+        },
+        model: {
+          name: 'qwen-coder',
+        },
+      };
+      const v1Settings = migrateSettingsToV1(v2Settings);
+
+      // Version field should not be present in V1 settings
+      expect(v1Settings[SETTINGS_VERSION_KEY]).toBeUndefined();
+      // Other fields should be properly migrated
+      expect(v1Settings).toEqual({
+        theme: 'dark',
+        model: 'qwen-coder',
+      });
+    });
+
+    it('should handle version field in unrecognized properties', () => {
+      const v2Settings = {
+        [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
+        general: {
+          vimMode: true,
+        },
+        someUnrecognizedKey: 'value',
+      };
+      const v1Settings = migrateSettingsToV1(v2Settings);
+
+      // Version field should be filtered out
+      expect(v1Settings[SETTINGS_VERSION_KEY]).toBeUndefined();
+      // Unrecognized keys should be preserved
+      expect(v1Settings['someUnrecognizedKey']).toBe('value');
+      expect(v1Settings['vimMode']).toBe(true);
+    });
   });

   describe('loadEnvironment', () => {
@@ -2368,6 +2581,73 @@ describe('Settings Loading and Merging', () => {
       };
       expect(needsMigration(settings)).toBe(false);
     });
+
+    describe('with version field', () => {
+      it('should return false when version field indicates current or newer version', () => {
+        const settingsWithVersion = {
+          [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
+          theme: 'dark', // Even though this is a V1 key, version field takes precedence
+        };
+        expect(needsMigration(settingsWithVersion)).toBe(false);
+      });
+
+      it('should return false when version field indicates a newer version', () => {
+        const settingsWithNewerVersion = {
+          [SETTINGS_VERSION_KEY]: SETTINGS_VERSION + 1,
+          theme: 'dark',
+        };
+        expect(needsMigration(settingsWithNewerVersion)).toBe(false);
+      });
+
+      it('should return true when version field indicates an older version', () => {
+        const settingsWithOldVersion = {
+          [SETTINGS_VERSION_KEY]: SETTINGS_VERSION - 1,
+          theme: 'dark',
+        };
+        expect(needsMigration(settingsWithOldVersion)).toBe(true);
+      });
+
+      it('should use fallback logic when version field is not a number', () => {
+        const settingsWithInvalidVersion = {
+          [SETTINGS_VERSION_KEY]: 'not-a-number',
+          theme: 'dark',
+        };
+        expect(needsMigration(settingsWithInvalidVersion)).toBe(true);
+      });
+
+      it('should use fallback logic when version field is missing', () => {
+        const settingsWithoutVersion = {
+          theme: 'dark',
+        };
+        expect(needsMigration(settingsWithoutVersion)).toBe(true);
+      });
+    });
+
+    describe('edge case: partially migrated settings', () => {
+      it('should return true for partially migrated settings without version field', () => {
+        // This simulates the dangerous edge case: model already in V2 format,
+        // but other fields in V1 format
+        const partiallyMigrated = {
+          model: {
+            name: 'qwen-coder',
+          },
+          autoAccept: false, // V1 key
+        };
+        expect(needsMigration(partiallyMigrated)).toBe(true);
+      });
+
+      it('should return false for partially migrated settings WITH version field', () => {
+        // With version field, we trust that it's been properly migrated
+        const partiallyMigratedWithVersion = {
+          [SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
+          model: {
+            name: 'qwen-coder',
+          },
+          autoAccept: false, // This would look like V1 but version says it's V2
+        };
+        expect(needsMigration(partiallyMigratedWithVersion)).toBe(false);
+      });
+    });
   });

   describe('migrateDeprecatedSettings', () => {
@@ -56,6 +56,10 @@ export const DEFAULT_EXCLUDED_ENV_VARS = ['DEBUG', 'DEBUG_MODE'];

 const MIGRATE_V2_OVERWRITE = true;

+// Settings version to track migration state
+export const SETTINGS_VERSION = 2;
+export const SETTINGS_VERSION_KEY = '$version';
+
 const MIGRATION_MAP: Record<string, string> = {
   accessibility: 'ui.accessibility',
   allowedTools: 'tools.allowed',
@@ -127,6 +131,7 @@ const MIGRATION_MAP: Record<string, string> = {
   sessionTokenLimit: 'model.sessionTokenLimit',
   contentGenerator: 'model.generationConfig',
   skipLoopDetection: 'model.skipLoopDetection',
+  skipStartupContext: 'model.skipStartupContext',
   enableOpenAILogging: 'model.enableOpenAILogging',
   tavilyApiKey: 'advanced.tavilyApiKey',
   vlmSwitchMode: 'experimental.vlmSwitchMode',
@@ -216,8 +221,16 @@ function setNestedProperty(
 }

 export function needsMigration(settings: Record<string, unknown>): boolean {
-  // A file needs migration if it contains any top-level key that is moved to a
-  // nested location in V2.
+  // Check version field first - if present and matches current version, no migration needed
+  if (SETTINGS_VERSION_KEY in settings) {
+    const version = settings[SETTINGS_VERSION_KEY];
+    if (typeof version === 'number' && version >= SETTINGS_VERSION) {
+      return false;
+    }
+  }
+
+  // Fallback to legacy detection: A file needs migration if it contains any
+  // top-level key that is moved to a nested location in V2.
   const hasV1Keys = Object.entries(MIGRATION_MAP).some(([v1Key, v2Path]) => {
     if (v1Key === v2Path || !(v1Key in settings)) {
       return false;
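The version-first check in the hunk above can be sketched standalone. This is a simplified sketch, not the project's exported code: `needsMigrationSketch` is a hypothetical name, and the `MIGRATION_MAP` entries below are illustrative stand-ins for the real, much larger table.

```typescript
// Version constants matching the diff's new exports.
const SETTINGS_VERSION = 2;
const SETTINGS_VERSION_KEY = '$version';

// Stand-in for the real MIGRATION_MAP: a few V1 keys and their V2 homes.
const MIGRATION_MAP: Record<string, string> = {
  theme: 'ui.theme',
  autoAccept: 'tools.autoAccept',
};

function needsMigrationSketch(settings: Record<string, unknown>): boolean {
  // Trust the version field when it is a number at or above the current version.
  if (SETTINGS_VERSION_KEY in settings) {
    const version = settings[SETTINGS_VERSION_KEY];
    if (typeof version === 'number' && version >= SETTINGS_VERSION) {
      return false;
    }
  }
  // Fallback: any known V1 top-level key means the file still needs migrating.
  return Object.keys(MIGRATION_MAP).some((v1Key) => v1Key in settings);
}
```

The point of the ordering is that a stamped `$version: 2` short-circuits the key scan, so a V2 file containing a key that happens to collide with a V1 name is never re-migrated; a non-numeric or missing version falls back to the legacy key-based detection.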
@@ -250,6 +263,21 @@ function migrateSettingsToV2(

   for (const [oldKey, newPath] of Object.entries(MIGRATION_MAP)) {
     if (flatKeys.has(oldKey)) {
+      // Safety check: If this key is a V2 container (like 'model') and it's
+      // already an object, it's likely already in V2 format. Skip migration
+      // to prevent double-nesting (e.g., model.name.name).
+      if (
+        KNOWN_V2_CONTAINERS.has(oldKey) &&
+        typeof flatSettings[oldKey] === 'object' &&
+        flatSettings[oldKey] !== null &&
+        !Array.isArray(flatSettings[oldKey])
+      ) {
+        // This is already a V2 container, carry it over as-is
+        v2Settings[oldKey] = flatSettings[oldKey];
+        flatKeys.delete(oldKey);
+        continue;
+      }
+
       setNestedProperty(v2Settings, newPath, flatSettings[oldKey]);
       flatKeys.delete(oldKey);
     }
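The double-nesting guard above can be illustrated in isolation. This is a sketch under stated assumptions: `KNOWN_V2_CONTAINERS` here is a hypothetical small set, `migrateKey` is an invented name, and the nesting logic inlines a simplified version of what the diff's `setNestedProperty` presumably does.

```typescript
// Assumed stand-in for the project's container set.
const KNOWN_V2_CONTAINERS = new Set(['model', 'ui', 'tools']);

function migrateKey(
  out: Record<string, unknown>,
  key: string,
  value: unknown,
  newPath: string,
): void {
  const isV2Container =
    KNOWN_V2_CONTAINERS.has(key) &&
    typeof value === 'object' &&
    value !== null &&
    !Array.isArray(value);
  if (isV2Container) {
    // Already V2: carry over as-is, avoiding double-nesting like model.name.name.
    out[key] = value;
    return;
  }
  // Otherwise nest the flat V1 value under its V2 path (e.g. 'model.name').
  const parts = newPath.split('.');
  let node = out;
  for (const part of parts.slice(0, -1)) {
    if (typeof node[part] !== 'object' || node[part] === null) {
      node[part] = {};
    }
    node = node[part] as Record<string, unknown>;
  }
  node[parts[parts.length - 1]] = value;
}
```

Run against both shapes, the same V2 result comes out: an already-migrated `model: { name: ... }` object is copied verbatim, while a flat V1 `model: 'qwen-coder'` string is nested under `model.name`.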
@@ -287,6 +315,9 @@ function migrateSettingsToV2(
     }
   }

+  // Set version field to indicate this is a V2 settings file
+  v2Settings[SETTINGS_VERSION_KEY] = SETTINGS_VERSION;
+
   return v2Settings;
 }

@@ -336,6 +367,11 @@ export function migrateSettingsToV1(

   // Carry over any unrecognized keys
   for (const remainingKey of v2Keys) {
+    // Skip the version field - it's only for V2 format
+    if (remainingKey === SETTINGS_VERSION_KEY) {
+      continue;
+    }
+
     const value = v2Settings[remainingKey];
     if (value === undefined) {
       continue;
@@ -621,6 +657,22 @@ export function loadSettings(
       }
       settingsObject = migratedSettings;
     }
+  } else if (!(SETTINGS_VERSION_KEY in settingsObject)) {
+    // No migration needed, but version field is missing - add it for future optimizations
+    settingsObject[SETTINGS_VERSION_KEY] = SETTINGS_VERSION;
+    if (MIGRATE_V2_OVERWRITE) {
+      try {
+        fs.writeFileSync(
+          filePath,
+          JSON.stringify(settingsObject, null, 2),
+          'utf-8',
+        );
+      } catch (e) {
+        console.error(
+          `Error adding version to settings file: ${getErrorMessage(e)}`,
+        );
+      }
+    }
   }
   return { settings: settingsObject as Settings, rawJson: content };
 }
@@ -12,6 +12,7 @@ import type {
   ChatCompressionSettings,
 } from '@qwen-code/qwen-code-core';
 import {
+  ApprovalMode,
   DEFAULT_TRUNCATE_TOOL_OUTPUT_LINES,
   DEFAULT_TRUNCATE_TOOL_OUTPUT_THRESHOLD,
 } from '@qwen-code/qwen-code-core';
@@ -549,6 +550,16 @@ const SETTINGS_SCHEMA = {
       description: 'Disable all loop detection checks (streaming and LLM).',
       showInDialog: true,
     },
+    skipStartupContext: {
+      type: 'boolean',
+      label: 'Skip Startup Context',
+      category: 'Model',
+      requiresRestart: true,
+      default: false,
+      description:
+        'Avoid sending the workspace startup context at the beginning of each session.',
+      showInDialog: true,
+    },
     enableOpenAILogging: {
       type: 'boolean',
       label: 'Enable OpenAI Logging',
@@ -558,6 +569,16 @@ const SETTINGS_SCHEMA = {
       description: 'Enable OpenAI logging.',
       showInDialog: true,
     },
+    openAILoggingDir: {
+      type: 'string',
+      label: 'OpenAI Logging Directory',
+      category: 'Model',
+      requiresRestart: false,
+      default: undefined as string | undefined,
+      description:
+        'Custom directory path for OpenAI API logs. If not specified, defaults to logs/openai in the current working directory.',
+      showInDialog: true,
+    },
     generationConfig: {
       type: 'object',
       label: 'Generation Configuration',
@@ -810,14 +831,20 @@ const SETTINGS_SCHEMA = {
       mergeStrategy: MergeStrategy.UNION,
     },
     approvalMode: {
-      type: 'string',
-      label: 'Default Approval Mode',
+      type: 'enum',
+      label: 'Approval Mode',
       category: 'Tools',
       requiresRestart: false,
-      default: 'default',
+      default: ApprovalMode.DEFAULT,
       description:
-        'Default approval mode for tool usage. Valid values: plan, default, auto-edit, yolo.',
+        'Approval mode for tool usage. Controls how tools are approved before execution.',
       showInDialog: true,
+      options: [
+        { value: ApprovalMode.PLAN, label: 'Plan' },
+        { value: ApprovalMode.DEFAULT, label: 'Default' },
+        { value: ApprovalMode.AUTO_EDIT, label: 'Auto Edit' },
+        { value: ApprovalMode.YOLO, label: 'YOLO' },
+      ],
     },
     discoveryCommand: {
       type: 'string',
@@ -847,6 +874,16 @@ const SETTINGS_SCHEMA = {
         'Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance.',
       showInDialog: true,
     },
+    useBuiltinRipgrep: {
+      type: 'boolean',
+      label: 'Use Builtin Ripgrep',
+      category: 'Tools',
+      requiresRestart: false,
+      default: true,
+      description:
+        'Use the bundled ripgrep binary. When set to false, the system-level "rg" command will be used instead. This setting is only effective when useRipgrep is true.',
+      showInDialog: true,
+    },
     enableToolOutputTruncation: {
       type: 'boolean',
       label: 'Enable Tool Output Truncation',
@@ -991,6 +1028,24 @@ const SETTINGS_SCHEMA = {
           description: 'Whether to use an external authentication flow.',
           showInDialog: false,
         },
+        apiKey: {
+          type: 'string',
+          label: 'API Key',
+          category: 'Security',
+          requiresRestart: true,
+          default: undefined as string | undefined,
+          description: 'API key for OpenAI compatible authentication.',
+          showInDialog: false,
+        },
+        baseUrl: {
+          type: 'string',
+          label: 'Base URL',
+          category: 'Security',
+          requiresRestart: true,
+          default: undefined as string | undefined,
+          description: 'Base URL for OpenAI compatible API.',
+          showInDialog: false,
+        },
       },
     },
   },
@@ -1044,17 +1099,36 @@ const SETTINGS_SCHEMA = {
     },
     tavilyApiKey: {
       type: 'string',
-      label: 'Tavily API Key',
+      label: 'Tavily API Key (Deprecated)',
       category: 'Advanced',
       requiresRestart: false,
       default: undefined as string | undefined,
       description:
-        'The API key for the Tavily API. Required to enable the web_search tool functionality.',
+        '⚠️ DEPRECATED: Please use webSearch.provider configuration instead. Legacy API key for the Tavily API.',
       showInDialog: false,
     },
   },
 },

+webSearch: {
+  type: 'object',
+  label: 'Web Search',
+  category: 'Advanced',
+  requiresRestart: true,
+  default: undefined as
+    | {
+        provider: Array<{
+          type: 'tavily' | 'google' | 'dashscope';
+          apiKey?: string;
+          searchEngineId?: string;
+        }>;
+        default: string;
+      }
+    | undefined,
+  description: 'Configuration for web search providers.',
+  showInDialog: false,
+},
+
 experimental: {
   type: 'object',
   label: 'Experimental',
121  packages/cli/src/config/webSearch.ts  Normal file
@@ -0,0 +1,121 @@
+/**
+ * @license
+ * Copyright 2025 Qwen
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import { AuthType } from '@qwen-code/qwen-code-core';
+import type { WebSearchProviderConfig } from '@qwen-code/qwen-code-core';
+import type { Settings } from './settings.js';
+
+/**
+ * CLI arguments related to web search configuration
+ */
+export interface WebSearchCliArgs {
+  tavilyApiKey?: string;
+  googleApiKey?: string;
+  googleSearchEngineId?: string;
+  webSearchDefault?: string;
+}
+
+/**
+ * Web search configuration structure
+ */
+export interface WebSearchConfig {
+  provider: WebSearchProviderConfig[];
+  default: string;
+}
+
+/**
+ * Build webSearch configuration from multiple sources with priority:
+ * 1. settings.json (new format) - highest priority
+ * 2. Command line args + environment variables
+ * 3. Legacy tavilyApiKey (backward compatibility)
+ *
+ * @param argv - Command line arguments
+ * @param settings - User settings from settings.json
+ * @param authType - Authentication type (e.g., 'qwen-oauth')
+ * @returns WebSearch configuration or undefined if no providers available
+ */
+export function buildWebSearchConfig(
+  argv: WebSearchCliArgs,
+  settings: Settings,
+  authType?: string,
+): WebSearchConfig | undefined {
+  const isQwenOAuth = authType === AuthType.QWEN_OAUTH;
+
+  // Step 1: Collect providers from settings or command line/env
+  let providers: WebSearchProviderConfig[] = [];
+  let userDefault: string | undefined;
+
+  if (settings.webSearch) {
+    // Use providers from settings.json
+    providers = [...settings.webSearch.provider];
+    userDefault = settings.webSearch.default;
+  } else {
+    // Build providers from command line args and environment variables
+    const tavilyKey =
+      argv.tavilyApiKey ||
+      settings.advanced?.tavilyApiKey ||
+      process.env['TAVILY_API_KEY'];
+    if (tavilyKey) {
+      providers.push({
+        type: 'tavily',
+        apiKey: tavilyKey,
+      } as WebSearchProviderConfig);
+    }
+
+    const googleKey = argv.googleApiKey || process.env['GOOGLE_API_KEY'];
+    const googleEngineId =
+      argv.googleSearchEngineId || process.env['GOOGLE_SEARCH_ENGINE_ID'];
+    if (googleKey && googleEngineId) {
+      providers.push({
+        type: 'google',
+        apiKey: googleKey,
+        searchEngineId: googleEngineId,
+      } as WebSearchProviderConfig);
+    }
+  }
+
+  // Step 2: Ensure dashscope is available for qwen-oauth users
+  if (isQwenOAuth) {
+    const hasDashscope = providers.some((p) => p.type === 'dashscope');
+    if (!hasDashscope) {
+      providers.push({ type: 'dashscope' } as WebSearchProviderConfig);
+    }
+  }
+
+  // Step 3: If no providers available, return undefined
+  if (providers.length === 0) {
+    return undefined;
+  }
+
+  // Step 4: Determine default provider
+  // Priority: user explicit config > CLI arg > first available provider (tavily > google > dashscope)
+  const providerPriority: Array<'tavily' | 'google' | 'dashscope'> = [
+    'tavily',
+    'google',
+    'dashscope',
+  ];
+
+  // Determine default provider based on availability
+  let defaultProvider = userDefault || argv.webSearchDefault;
+  if (!defaultProvider) {
+    // Find first available provider by priority order
+    for (const providerType of providerPriority) {
+      if (providers.some((p) => p.type === providerType)) {
+        defaultProvider = providerType;
+        break;
+      }
+    }
+    // Fallback to first available provider if none found in priority list
+    if (!defaultProvider) {
+      defaultProvider = providers[0]?.type || 'dashscope';
+    }
+  }
+
+  return {
+    provider: providers,
+    default: defaultProvider,
+  };
+}
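The default-provider selection in the new `buildWebSearchConfig` file above reduces to a small decision function. This standalone sketch (with a hypothetical name, `pickDefaultProvider`) isolates just that logic: explicit configuration wins, otherwise the first available provider in tavily > google > dashscope order.

```typescript
type ProviderType = 'tavily' | 'google' | 'dashscope';

function pickDefaultProvider(
  available: ProviderType[],
  explicit?: string,
): string | undefined {
  if (available.length === 0) return undefined; // no web search at all
  if (explicit) return explicit; // settings.json or CLI choice takes precedence
  const priority: ProviderType[] = ['tavily', 'google', 'dashscope'];
  for (const p of priority) {
    if (available.includes(p)) return p;
  }
  return available[0]; // fallback: first provider we actually have
}
```

So a qwen-oauth user who only gets the auto-injected dashscope provider defaults to dashscope, but adding a Tavily key immediately promotes tavily to the default unless an explicit choice overrides it.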
@@ -327,9 +327,13 @@ describe('gemini.tsx main function kitty protocol', () => {
       openaiLogging: undefined,
       openaiApiKey: undefined,
       openaiBaseUrl: undefined,
+      openaiLoggingDir: undefined,
       proxy: undefined,
       includeDirectories: undefined,
       tavilyApiKey: undefined,
+      googleApiKey: undefined,
+      googleSearchEngineId: undefined,
+      webSearchDefault: undefined,
       screenReader: undefined,
       vlmSwitchMode: undefined,
       useSmartEdit: undefined,
@@ -17,11 +17,7 @@ import dns from 'node:dns';
 import { randomUUID } from 'node:crypto';
 import { start_sandbox } from './utils/sandbox.js';
 import type { DnsResolutionOrder, LoadedSettings } from './config/settings.js';
-import {
-  loadSettings,
-  migrateDeprecatedSettings,
-  SettingScope,
-} from './config/settings.js';
+import { loadSettings, migrateDeprecatedSettings } from './config/settings.js';
 import { themeManager } from './ui/themes/theme-manager.js';
 import { getStartupWarnings } from './utils/startupWarnings.js';
 import { getUserStartupWarnings } from './utils/userStartupWarnings.js';
@@ -233,17 +229,6 @@ export async function main() {
     validateDnsResolutionOrder(settings.merged.advanced?.dnsResolutionOrder),
   );

-  // Set a default auth type if one isn't set.
-  if (!settings.merged.security?.auth?.selectedType) {
-    if (process.env['CLOUD_SHELL'] === 'true') {
-      settings.setValue(
-        SettingScope.User,
-        'selectedAuthType',
-        AuthType.CLOUD_SHELL,
-      );
-    }
-  }
-
   // Load custom themes from settings
   themeManager.loadCustomThemes(settings.merged.ui?.customThemes);

@@ -402,7 +387,11 @@ export async function main() {
   let input = config.getQuestion();
   const startupWarnings = [
     ...(await getStartupWarnings()),
-    ...(await getUserStartupWarnings()),
+    ...(await getUserStartupWarnings({
+      workspaceRoot: process.cwd(),
+      useRipgrep: settings.merged.tools?.useRipgrep ?? true,
+      useBuiltinRipgrep: settings.merged.tools?.useBuiltinRipgrep ?? true,
+    })),
   ];

   // Render UI, passing necessary config values. Check that there is no command line question.
@@ -1227,4 +1227,28 @@ describe('FileCommandLoader', () => {
       expect(commands).toHaveLength(0);
     });
   });
+
+  describe('AbortError handling', () => {
+    it('should silently ignore AbortError when operation is cancelled', async () => {
+      const userCommandsDir = Storage.getUserCommandsDir();
+      mock({
+        [userCommandsDir]: {
+          'test1.toml': 'prompt = "Prompt 1"',
+          'test2.toml': 'prompt = "Prompt 2"',
+        },
+      });
+
+      const loader = new FileCommandLoader(null);
+      const controller = new AbortController();
+      const signal = controller.signal;
+
+      // Start loading and immediately abort
+      const loadPromise = loader.loadCommands(signal);
+      controller.abort();
+
+      // Should not throw or print errors
+      const commands = await loadPromise;
+      expect(commands).toHaveLength(0);
+    });
+  });
 });
@@ -120,7 +120,11 @@ export class FileCommandLoader implements ICommandLoader {
         // Add all commands without deduplication
         allCommands.push(...commands);
       } catch (error) {
-        if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
+        // Ignore ENOENT (directory doesn't exist) and AbortError (operation was cancelled)
+        const isEnoent = (error as NodeJS.ErrnoException).code === 'ENOENT';
+        const isAbortError =
+          error instanceof Error && error.name === 'AbortError';
+        if (!isEnoent && !isAbortError) {
           console.error(
             `[FileCommandLoader] Error loading commands from ${dirInfo.path}:`,
             error,
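The widened error filter above amounts to a small predicate. This sketch (the function name `shouldLogLoadError` is hypothetical) shows the classification on its own: a missing commands directory (`ENOENT`) and a cancelled load (`AbortError`) are expected conditions that stay silent, while anything else is worth logging.

```typescript
function shouldLogLoadError(error: unknown): boolean {
  // Node fs errors carry a string `code`; ENOENT means the directory is absent.
  const isEnoent = (error as { code?: string })?.code === 'ENOENT';
  // Aborting via AbortSignal surfaces as an Error named 'AbortError'.
  const isAbortError = error instanceof Error && error.name === 'AbortError';
  return !isEnoent && !isAbortError;
}
```

Matching on `error.name === 'AbortError'` rather than an instanceof check is the usual approach, since abort errors can originate from different realms and libraries.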
@@ -53,6 +53,7 @@ import { useQuotaAndFallback } from './hooks/useQuotaAndFallback.js';
 import { useEditorSettings } from './hooks/useEditorSettings.js';
 import { useSettingsCommand } from './hooks/useSettingsCommand.js';
 import { useModelCommand } from './hooks/useModelCommand.js';
+import { useApprovalModeCommand } from './hooks/useApprovalModeCommand.js';
 import { useSlashCommandProcessor } from './hooks/slashCommandProcessor.js';
 import { useVimMode } from './contexts/VimModeContext.js';
 import { useConsoleMessages } from './hooks/useConsoleMessages.js';
@@ -335,6 +336,12 @@ export const AppContainer = (props: AppContainerProps) => {
     initializationResult.themeError,
   );

+  const {
+    isApprovalModeDialogOpen,
+    openApprovalModeDialog,
+    handleApprovalModeSelect,
+  } = useApprovalModeCommand(settings, config);
+
   const {
     setAuthState,
     authError,
@@ -470,6 +477,7 @@ export const AppContainer = (props: AppContainerProps) => {
       openSettingsDialog,
       openModelDialog,
       openPermissionsDialog,
+      openApprovalModeDialog,
       quit: (messages: HistoryItem[]) => {
         setQuittingMessages(messages);
         setTimeout(async () => {
@@ -495,6 +503,7 @@ export const AppContainer = (props: AppContainerProps) => {
       setCorgiMode,
       dispatchExtensionStateUpdate,
       openPermissionsDialog,
+      openApprovalModeDialog,
       addConfirmUpdateExtensionRequest,
       showQuitConfirmation,
       openSubagentCreateDialog,
@@ -551,6 +560,11 @@ export const AppContainer = (props: AppContainerProps) => {
     [visionSwitchResolver],
   );

+  // onDebugMessage should log to console, not update footer debugMessage
+  const onDebugMessage = useCallback((message: string) => {
+    console.debug(message);
+  }, []);
+
   const performMemoryRefresh = useCallback(async () => {
     historyManager.addItem(
       {
@@ -628,7 +642,7 @@ export const AppContainer = (props: AppContainerProps) => {
     historyManager.addItem,
     config,
     settings,
-    setDebugMessage,
+    onDebugMessage,
     handleSlashCommand,
     shellModeActive,
     () => settings.merged.general?.preferredEditor as EditorType,
@@ -916,17 +930,9 @@ export const AppContainer = (props: AppContainerProps) => {
     (result: IdeIntegrationNudgeResult) => {
       if (result.userSelection === 'yes') {
         handleSlashCommand('/ide install');
-        settings.setValue(
-          SettingScope.User,
-          'hasSeenIdeIntegrationNudge',
-          true,
-        );
+        settings.setValue(SettingScope.User, 'ide.hasSeenNudge', true);
       } else if (result.userSelection === 'dismiss') {
-        settings.setValue(
-          SettingScope.User,
-          'hasSeenIdeIntegrationNudge',
-          true,
-        );
+        settings.setValue(SettingScope.User, 'ide.hasSeenNudge', true);
       }
       setIdePromptAnswered(true);
     },
@@ -942,6 +948,8 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
const { closeAnyOpenDialog } = useDialogClose({
|
const { closeAnyOpenDialog } = useDialogClose({
|
||||||
isThemeDialogOpen,
|
isThemeDialogOpen,
|
||||||
handleThemeSelect,
|
handleThemeSelect,
|
||||||
|
isApprovalModeDialogOpen,
|
||||||
|
handleApprovalModeSelect,
|
||||||
isAuthDialogOpen,
|
isAuthDialogOpen,
|
||||||
handleAuthSelect,
|
handleAuthSelect,
|
||||||
selectedAuthType: settings.merged.security?.auth?.selectedType,
|
selectedAuthType: settings.merged.security?.auth?.selectedType,
|
||||||
@@ -1191,7 +1199,8 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
showIdeRestartPrompt ||
|
showIdeRestartPrompt ||
|
||||||
!!proQuotaRequest ||
|
!!proQuotaRequest ||
|
||||||
isSubagentCreateDialogOpen ||
|
isSubagentCreateDialogOpen ||
|
||||||
isAgentsManagerDialogOpen;
|
isAgentsManagerDialogOpen ||
|
||||||
|
isApprovalModeDialogOpen;
|
||||||
|
|
||||||
const pendingHistoryItems = useMemo(
|
const pendingHistoryItems = useMemo(
|
||||||
() => [...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems],
|
() => [...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems],
|
||||||
@@ -1222,6 +1231,7 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
isSettingsDialogOpen,
|
isSettingsDialogOpen,
|
||||||
isModelDialogOpen,
|
isModelDialogOpen,
|
||||||
isPermissionsDialogOpen,
|
isPermissionsDialogOpen,
|
||||||
|
isApprovalModeDialogOpen,
|
||||||
slashCommands,
|
slashCommands,
|
||||||
pendingSlashCommandHistoryItems,
|
pendingSlashCommandHistoryItems,
|
||||||
commandContext,
|
commandContext,
|
||||||
@@ -1316,6 +1326,7 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
isSettingsDialogOpen,
|
isSettingsDialogOpen,
|
||||||
isModelDialogOpen,
|
isModelDialogOpen,
|
||||||
isPermissionsDialogOpen,
|
isPermissionsDialogOpen,
|
||||||
|
isApprovalModeDialogOpen,
|
||||||
slashCommands,
|
slashCommands,
|
||||||
pendingSlashCommandHistoryItems,
|
pendingSlashCommandHistoryItems,
|
||||||
commandContext,
|
commandContext,
|
||||||
@@ -1396,6 +1407,7 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
() => ({
|
() => ({
|
||||||
handleThemeSelect,
|
handleThemeSelect,
|
||||||
handleThemeHighlight,
|
handleThemeHighlight,
|
||||||
|
handleApprovalModeSelect,
|
||||||
handleAuthSelect,
|
handleAuthSelect,
|
||||||
setAuthState,
|
setAuthState,
|
||||||
onAuthError,
|
onAuthError,
|
||||||
@@ -1431,6 +1443,7 @@ export const AppContainer = (props: AppContainerProps) => {
|
|||||||
[
|
[
|
||||||
handleThemeSelect,
|
handleThemeSelect,
|
||||||
handleThemeHighlight,
|
handleThemeHighlight,
|
||||||
|
handleApprovalModeSelect,
|
||||||
handleAuthSelect,
|
handleAuthSelect,
|
||||||
setAuthState,
|
setAuthState,
|
||||||
onAuthError,
|
onAuthError,
|
||||||
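The hunks above collapse multi-line `settings.setValue(...)` calls into single calls with dotted keys such as `'ide.hasSeenNudge'` and `'security.auth.selectedType'`. A minimal sketch of how a settings store might expand a dotted key into a nested object (this `setValue` helper is illustrative, not the CLI's actual implementation):

```typescript
// Hypothetical sketch: expand a dotted settings key like 'ide.hasSeenNudge'
// into a nested object, the way a settings store might persist it.
type Settings = { [key: string]: unknown };

function setValue(settings: Settings, dottedKey: string, value: unknown): void {
  const parts = dottedKey.split('.');
  let node: Settings = settings;
  // Walk/create intermediate objects for every segment but the last.
  for (const part of parts.slice(0, -1)) {
    if (typeof node[part] !== 'object' || node[part] === null) {
      node[part] = {};
    }
    node = node[part] as Settings;
  }
  node[parts[parts.length - 1]] = value;
}

const settings: Settings = {};
setValue(settings, 'ide.hasSeenNudge', true);
setValue(settings, 'security.auth.selectedType', 'qwen-oauth');
console.log(JSON.stringify(settings));
```

A one-line call site then carries the whole path, which is why the diff can drop the four-line argument lists.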
```diff
@@ -8,12 +8,7 @@ import type React from 'react';
 import { useState } from 'react';
 import { AuthType } from '@qwen-code/qwen-code-core';
 import { Box, Text } from 'ink';
-import {
-  setOpenAIApiKey,
-  setOpenAIBaseUrl,
-  setOpenAIModel,
-  validateAuthMethod,
-} from '../../config/auth.js';
+import { validateAuthMethod } from '../../config/auth.js';
 import { type LoadedSettings, SettingScope } from '../../config/settings.js';
 import { Colors } from '../colors.js';
 import { useKeypress } from '../hooks/useKeypress.js';
@@ -21,7 +16,15 @@ import { OpenAIKeyPrompt } from '../components/OpenAIKeyPrompt.js';
 import { RadioButtonSelect } from '../components/shared/RadioButtonSelect.js';

 interface AuthDialogProps {
-  onSelect: (authMethod: AuthType | undefined, scope: SettingScope) => void;
+  onSelect: (
+    authMethod: AuthType | undefined,
+    scope: SettingScope,
+    credentials?: {
+      apiKey?: string;
+      baseUrl?: string;
+      model?: string;
+    },
+  ) => void;
   settings: LoadedSettings;
   initialErrorMessage?: string | null;
 }
@@ -70,11 +73,7 @@ export function AuthDialog({
         return item.value === defaultAuthType;
       }

-      if (process.env['GEMINI_API_KEY']) {
-        return item.value === AuthType.USE_GEMINI;
-      }
-
-      return item.value === AuthType.LOGIN_WITH_GOOGLE;
+      return item.value === AuthType.QWEN_OAUTH;
     }),
   );

@@ -101,11 +100,12 @@ export function AuthDialog({
     baseUrl: string,
     model: string,
   ) => {
-    setOpenAIApiKey(apiKey);
-    setOpenAIBaseUrl(baseUrl);
-    setOpenAIModel(model);
     setShowOpenAIKeyPrompt(false);
-    onSelect(AuthType.USE_OPENAI, SettingScope.User);
+    onSelect(AuthType.USE_OPENAI, SettingScope.User, {
+      apiKey,
+      baseUrl,
+      model,
+    });
   };

   const handleOpenAIKeyCancel = () => {
```
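The widened `onSelect` signature above threads the OpenAI credentials through the callback instead of writing them via module-level setters, so OAuth selections can omit them entirely. A small standalone sketch of the same pattern (the types and names here are illustrative stand-ins, not the repo's):

```typescript
// Illustrative sketch: a selection callback with an optional credentials bag.
type AuthMethod = 'openai' | 'qwen-oauth';

interface Credentials {
  apiKey?: string;
  baseUrl?: string;
  model?: string;
}

type OnSelect = (method: AuthMethod, credentials?: Credentials) => void;

// Capture calls so we can inspect what was threaded through.
const captured: Array<[AuthMethod, Credentials | undefined]> = [];
const onSelect: OnSelect = (method, credentials) => {
  captured.push([method, credentials]);
};

// OAuth flows need no credentials; API-key flows pass them explicitly.
onSelect('qwen-oauth');
onSelect('openai', {
  apiKey: 'sk-test',
  baseUrl: 'https://example.invalid/v1',
  model: 'demo',
});
```

Making the bag optional keeps every existing two-argument call site valid, which is why the diff only has to touch the OpenAI path.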
```diff
@@ -6,12 +6,11 @@

 import { useState, useCallback, useEffect } from 'react';
 import type { LoadedSettings, SettingScope } from '../../config/settings.js';
-import { AuthType, type Config } from '@qwen-code/qwen-code-core';
+import type { AuthType, Config } from '@qwen-code/qwen-code-core';
 import {
   clearCachedCredentialFile,
   getErrorMessage,
 } from '@qwen-code/qwen-code-core';
-import { runExitCleanup } from '../../utils/cleanup.js';
 import { AuthState } from '../types.js';
 import { validateAuthMethod } from '../../config/auth.js';

@@ -47,6 +46,7 @@ export const useAuthCommand = (settings: LoadedSettings, config: Config) => {
       setAuthError(error);
       if (error) {
         setAuthState(AuthState.Updating);
+        setIsAuthDialogOpen(true);
       }
     },
     [setAuthError, setAuthState],
@@ -87,24 +87,49 @@ export const useAuthCommand = (settings: LoadedSettings, config: Config) => {

   // Handle auth selection from dialog
   const handleAuthSelect = useCallback(
-    async (authType: AuthType | undefined, scope: SettingScope) => {
+    async (
+      authType: AuthType | undefined,
+      scope: SettingScope,
+      credentials?: {
+        apiKey?: string;
+        baseUrl?: string;
+        model?: string;
+      },
+    ) => {
       if (authType) {
         await clearCachedCredentialFile();

-        settings.setValue(scope, 'security.auth.selectedType', authType);
-
-        if (
-          authType === AuthType.LOGIN_WITH_GOOGLE &&
-          config.isBrowserLaunchSuppressed()
-        ) {
-          await runExitCleanup();
-          console.log(`
-----------------------------------------------------------------
-Logging in with Google... Please restart Gemini CLI to continue.
-----------------------------------------------------------------
-          `);
-          process.exit(0);
+        // Save OpenAI credentials if provided
+        if (credentials) {
+          // Update Config's internal generationConfig before calling refreshAuth
+          // This ensures refreshAuth has access to the new credentials
+          config.updateCredentials({
+            apiKey: credentials.apiKey,
+            baseUrl: credentials.baseUrl,
+            model: credentials.model,
+          });
+
+          // Also set environment variables for compatibility with other parts of the code
+          if (credentials.apiKey) {
+            settings.setValue(
+              scope,
+              'security.auth.apiKey',
+              credentials.apiKey,
+            );
+          }
+          if (credentials.baseUrl) {
+            settings.setValue(
+              scope,
+              'security.auth.baseUrl',
+              credentials.baseUrl,
+            );
+          }
+          if (credentials.model) {
+            settings.setValue(scope, 'model.name', credentials.model);
+          }
         }

+        settings.setValue(scope, 'security.auth.selectedType', authType);
       }

       setIsAuthDialogOpen(false);
```
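In the hunk above, `handleAuthSelect` applies incoming credentials to the in-memory config (`config.updateCredentials`) before persisting them and the selected auth type to settings, so a subsequent auth refresh already sees the new key. A toy sketch of that ordering (both classes are simplified stand-ins, not the real `Config` or settings objects):

```typescript
// Toy stand-ins showing "update runtime config first, then persist selection".
class FakeConfig {
  apiKey?: string;
  updateCredentials(c: { apiKey?: string }): void {
    if (c.apiKey) this.apiKey = c.apiKey;
  }
}

class FakeSettings {
  values = new Map<string, unknown>();
  setValue(key: string, value: unknown): void {
    this.values.set(key, value);
  }
}

function handleAuthSelect(
  config: FakeConfig,
  settings: FakeSettings,
  authType: string,
  credentials?: { apiKey?: string },
): void {
  if (credentials) {
    // Runtime config first, so a later refreshAuth sees the new key.
    config.updateCredentials(credentials);
    if (credentials.apiKey) {
      settings.setValue('security.auth.apiKey', credentials.apiKey);
    }
  }
  // Persist the chosen auth type last, with or without credentials.
  settings.setValue('security.auth.selectedType', authType);
}

const config = new FakeConfig();
const settings = new FakeSettings();
handleAuthSelect(config, settings, 'openai', { apiKey: 'sk-demo' });
console.log(config.apiKey, settings.values.get('security.auth.selectedType'));
```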
```diff
@@ -8,38 +8,22 @@ import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
 import { aboutCommand } from './aboutCommand.js';
 import { type CommandContext } from './types.js';
 import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
-import * as versionUtils from '../../utils/version.js';
 import { MessageType } from '../types.js';
-import { IdeClient } from '@qwen-code/qwen-code-core';
+import * as systemInfoUtils from '../../utils/systemInfo.js';

-vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
-  const actual =
-    await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
-  return {
-    ...actual,
-    IdeClient: {
-      getInstance: vi.fn().mockResolvedValue({
-        getDetectedIdeDisplayName: vi.fn().mockReturnValue('test-ide'),
-      }),
-    },
-  };
-});
-
-vi.mock('../../utils/version.js', () => ({
-  getCliVersion: vi.fn(),
-}));
+vi.mock('../../utils/systemInfo.js');

 describe('aboutCommand', () => {
   let mockContext: CommandContext;
-  const originalPlatform = process.platform;
   const originalEnv = { ...process.env };

   beforeEach(() => {
     mockContext = createMockCommandContext({
       services: {
         config: {
-          getModel: vi.fn(),
+          getModel: vi.fn().mockReturnValue('test-model'),
           getIdeMode: vi.fn().mockReturnValue(true),
+          getSessionId: vi.fn().mockReturnValue('test-session-id'),
         },
         settings: {
           merged: {
@@ -56,21 +40,25 @@ describe('aboutCommand', () => {
       },
     } as unknown as CommandContext);

-    vi.mocked(versionUtils.getCliVersion).mockResolvedValue('test-version');
-    vi.spyOn(mockContext.services.config!, 'getModel').mockReturnValue(
-      'test-model',
-    );
-    process.env['GOOGLE_CLOUD_PROJECT'] = 'test-gcp-project';
-    Object.defineProperty(process, 'platform', {
-      value: 'test-os',
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'no sandbox',
+      modelVersion: 'test-model',
+      selectedAuthType: 'test-auth',
+      ideClient: 'test-ide',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
     });
   });

   afterEach(() => {
     vi.unstubAllEnvs();
-    Object.defineProperty(process, 'platform', {
-      value: originalPlatform,
-    });
     process.env = originalEnv;
     vi.clearAllMocks();
   });
@@ -81,30 +69,55 @@ describe('aboutCommand', () => {
   });

   it('should call addItem with all version info', async () => {
-    process.env['SANDBOX'] = '';
     if (!aboutCommand.action) {
       throw new Error('The about command must have an action.');
     }

     await aboutCommand.action(mockContext, '');

+    expect(systemInfoUtils.getExtendedSystemInfo).toHaveBeenCalledWith(
+      mockContext,
+    );
     expect(mockContext.ui.addItem).toHaveBeenCalledWith(
-      {
+      expect.objectContaining({
         type: MessageType.ABOUT,
-        cliVersion: 'test-version',
-        osVersion: 'test-os',
-        sandboxEnv: 'no sandbox',
-        modelVersion: 'test-model',
-        selectedAuthType: 'test-auth',
-        gcpProject: 'test-gcp-project',
-        ideClient: 'test-ide',
-      },
+        systemInfo: expect.objectContaining({
+          cliVersion: 'test-version',
+          osPlatform: 'test-os',
+          osArch: 'x64',
+          osRelease: '22.0.0',
+          nodeVersion: 'v20.0.0',
+          npmVersion: '10.0.0',
+          sandboxEnv: 'no sandbox',
+          modelVersion: 'test-model',
+          selectedAuthType: 'test-auth',
+          ideClient: 'test-ide',
+          sessionId: 'test-session-id',
+          memoryUsage: '100 MB',
+          baseUrl: undefined,
+        }),
+      }),
       expect.any(Number),
     );
   });

   it('should show the correct sandbox environment variable', async () => {
-    process.env['SANDBOX'] = 'gemini-sandbox';
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'gemini-sandbox',
+      modelVersion: 'test-model',
+      selectedAuthType: 'test-auth',
+      ideClient: 'test-ide',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
+    });
+
     if (!aboutCommand.action) {
       throw new Error('The about command must have an action.');
     }
@@ -113,15 +126,32 @@ describe('aboutCommand', () => {

     expect(mockContext.ui.addItem).toHaveBeenCalledWith(
       expect.objectContaining({
-        sandboxEnv: 'gemini-sandbox',
+        type: MessageType.ABOUT,
+        systemInfo: expect.objectContaining({
+          sandboxEnv: 'gemini-sandbox',
+        }),
       }),
       expect.any(Number),
     );
   });

   it('should show sandbox-exec profile when applicable', async () => {
-    process.env['SANDBOX'] = 'sandbox-exec';
-    process.env['SEATBELT_PROFILE'] = 'test-profile';
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'sandbox-exec (test-profile)',
+      modelVersion: 'test-model',
+      selectedAuthType: 'test-auth',
+      ideClient: 'test-ide',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
+    });
+
     if (!aboutCommand.action) {
       throw new Error('The about command must have an action.');
     }
@@ -130,18 +160,31 @@ describe('aboutCommand', () => {

     expect(mockContext.ui.addItem).toHaveBeenCalledWith(
       expect.objectContaining({
-        sandboxEnv: 'sandbox-exec (test-profile)',
+        systemInfo: expect.objectContaining({
+          sandboxEnv: 'sandbox-exec (test-profile)',
+        }),
       }),
       expect.any(Number),
     );
   });

   it('should not show ide client when it is not detected', async () => {
-    vi.mocked(IdeClient.getInstance).mockResolvedValue({
-      getDetectedIdeDisplayName: vi.fn().mockReturnValue(undefined),
-    } as unknown as IdeClient);
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'no sandbox',
+      modelVersion: 'test-model',
+      selectedAuthType: 'test-auth',
+      ideClient: '',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
+    });

-    process.env['SANDBOX'] = '';
     if (!aboutCommand.action) {
       throw new Error('The about command must have an action.');
     }
@@ -151,13 +194,87 @@ describe('aboutCommand', () => {
     expect(mockContext.ui.addItem).toHaveBeenCalledWith(
       expect.objectContaining({
         type: MessageType.ABOUT,
-        cliVersion: 'test-version',
-        osVersion: 'test-os',
-        sandboxEnv: 'no sandbox',
-        modelVersion: 'test-model',
-        selectedAuthType: 'test-auth',
-        gcpProject: 'test-gcp-project',
-        ideClient: '',
+        systemInfo: expect.objectContaining({
+          cliVersion: 'test-version',
+          osPlatform: 'test-os',
+          osArch: 'x64',
+          osRelease: '22.0.0',
+          nodeVersion: 'v20.0.0',
+          npmVersion: '10.0.0',
+          sandboxEnv: 'no sandbox',
+          modelVersion: 'test-model',
+          selectedAuthType: 'test-auth',
+          ideClient: '',
+          sessionId: 'test-session-id',
+          memoryUsage: '100 MB',
+          baseUrl: undefined,
+        }),
+      }),
+      expect.any(Number),
+    );
+  });
+
+  it('should show unknown npmVersion when npm command fails', async () => {
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: 'unknown',
+      sandboxEnv: 'no sandbox',
+      modelVersion: 'test-model',
+      selectedAuthType: 'test-auth',
+      ideClient: 'test-ide',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
+    });
+
+    if (!aboutCommand.action) {
+      throw new Error('The about command must have an action.');
+    }
+
+    await aboutCommand.action(mockContext, '');
+
+    expect(mockContext.ui.addItem).toHaveBeenCalledWith(
+      expect.objectContaining({
+        systemInfo: expect.objectContaining({
+          npmVersion: 'unknown',
+        }),
+      }),
+      expect.any(Number),
+    );
+  });
+
+  it('should show unknown sessionId when config is not available', async () => {
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: 'test-version',
+      osPlatform: 'test-os',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'no sandbox',
+      modelVersion: 'Unknown',
+      selectedAuthType: 'test-auth',
+      ideClient: '',
+      sessionId: 'unknown',
+      memoryUsage: '100 MB',
+      baseUrl: undefined,
+    });
+
+    if (!aboutCommand.action) {
+      throw new Error('The about command must have an action.');
+    }
+
+    await aboutCommand.action(mockContext, '');
+
+    expect(mockContext.ui.addItem).toHaveBeenCalledWith(
+      expect.objectContaining({
+        systemInfo: expect.objectContaining({
+          sessionId: 'unknown',
+        }),
       }),
       expect.any(Number),
     );
```
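The rewritten tests above assert on a nested `systemInfo` object via `expect.objectContaining`, which matches only the keys you list while ignoring the rest. A minimal sketch of that kind of partial structural match (a toy matcher for plain objects, not vitest's implementation):

```typescript
// Toy partial matcher: does `actual` contain every key/value in `expected`,
// recursing into plain objects the way nested objectContaining matchers do?
function containsSubset(actual: unknown, expected: unknown): boolean {
  if (typeof expected !== 'object' || expected === null) {
    // Leaf: require exact equality.
    return actual === expected;
  }
  if (typeof actual !== 'object' || actual === null) {
    return false;
  }
  return Object.entries(expected as Record<string, unknown>).every(([k, v]) =>
    containsSubset((actual as Record<string, unknown>)[k], v),
  );
}

const item = {
  type: 'about',
  systemInfo: { sandboxEnv: 'gemini-sandbox', npmVersion: '10.0.0' },
};

// Matches on the one nested key we care about, ignoring npmVersion and type.
console.log(containsSubset(item, { systemInfo: { sandboxEnv: 'gemini-sandbox' } }));
```

Partial matching keeps each test focused on the field under test, which is why the diff can add fields to `systemInfo` without rewriting every assertion.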
```diff
@@ -4,53 +4,23 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-import { getCliVersion } from '../../utils/version.js';
-import type { CommandContext, SlashCommand } from './types.js';
+import type { SlashCommand } from './types.js';
 import { CommandKind } from './types.js';
-import process from 'node:process';
 import { MessageType, type HistoryItemAbout } from '../types.js';
-import { IdeClient } from '@qwen-code/qwen-code-core';
+import { getExtendedSystemInfo } from '../../utils/systemInfo.js';

 export const aboutCommand: SlashCommand = {
   name: 'about',
   description: 'show version info',
   kind: CommandKind.BUILT_IN,
   action: async (context) => {
-    const osVersion = process.platform;
-    let sandboxEnv = 'no sandbox';
-    if (process.env['SANDBOX'] && process.env['SANDBOX'] !== 'sandbox-exec') {
-      sandboxEnv = process.env['SANDBOX'];
-    } else if (process.env['SANDBOX'] === 'sandbox-exec') {
-      sandboxEnv = `sandbox-exec (${
-        process.env['SEATBELT_PROFILE'] || 'unknown'
-      })`;
-    }
-    const modelVersion = context.services.config?.getModel() || 'Unknown';
-    const cliVersion = await getCliVersion();
-    const selectedAuthType =
-      context.services.settings.merged.security?.auth?.selectedType || '';
-    const gcpProject = process.env['GOOGLE_CLOUD_PROJECT'] || '';
-    const ideClient = await getIdeClientName(context);
+    const systemInfo = await getExtendedSystemInfo(context);

     const aboutItem: Omit<HistoryItemAbout, 'id'> = {
       type: MessageType.ABOUT,
-      cliVersion,
-      osVersion,
-      sandboxEnv,
-      modelVersion,
-      selectedAuthType,
-      gcpProject,
-      ideClient,
+      systemInfo,
     };

     context.ui.addItem(aboutItem, Date.now());
   },
 };
-
-async function getIdeClientName(context: CommandContext) {
-  if (!context.services.config?.getIdeMode()) {
-    return '';
-  }
-  const ideClient = await IdeClient.getInstance();
-  return ideClient?.getDetectedIdeDisplayName() ?? '';
-}
```
```diff
@@ -4,492 +4,68 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
+import { describe, it, expect } from 'vitest';
 import { approvalModeCommand } from './approvalModeCommand.js';
 import {
   type CommandContext,
   CommandKind,
-  type MessageActionReturn,
+  type OpenDialogActionReturn,
 } from './types.js';
 import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
-import { ApprovalMode } from '@qwen-code/qwen-code-core';
-import { SettingScope, type LoadedSettings } from '../../config/settings.js';
+import type { LoadedSettings } from '../../config/settings.js';

 describe('approvalModeCommand', () => {
   let mockContext: CommandContext;
-  let setApprovalModeMock: ReturnType<typeof vi.fn>;
-  let setSettingsValueMock: ReturnType<typeof vi.fn>;
-  const originalEnv = { ...process.env };
-  const userSettingsPath = '/mock/user/settings.json';
-  const projectSettingsPath = '/mock/project/settings.json';
-  const userSettingsFile = { path: userSettingsPath, settings: {} };
-  const projectSettingsFile = { path: projectSettingsPath, settings: {} };
-
-  const getModeSubCommand = (mode: ApprovalMode) =>
-    approvalModeCommand.subCommands?.find((cmd) => cmd.name === mode);
-
-  const getScopeSubCommand = (
-    mode: ApprovalMode,
-    scope: '--session' | '--user' | '--project',
-  ) => getModeSubCommand(mode)?.subCommands?.find((cmd) => cmd.name === scope);
-
   beforeEach(() => {
-    setApprovalModeMock = vi.fn();
-    setSettingsValueMock = vi.fn();
-
     mockContext = createMockCommandContext({
       services: {
         config: {
-          getApprovalMode: vi.fn().mockReturnValue(ApprovalMode.DEFAULT),
-          setApprovalMode: setApprovalModeMock,
+          getApprovalMode: () => 'default',
+          setApprovalMode: () => {},
         },
         settings: {
           merged: {},
-          setValue: setSettingsValueMock,
-          forScope: vi
-            .fn()
-            .mockImplementation((scope: SettingScope) =>
-              scope === SettingScope.User
-                ? userSettingsFile
-                : scope === SettingScope.Workspace
-                  ? projectSettingsFile
-                  : { path: '', settings: {} },
-            ),
+          setValue: () => {},
+          forScope: () => ({}),
         } as unknown as LoadedSettings,
       },
-    } as unknown as CommandContext);
+    });
   });

-  afterEach(() => {
-    process.env = { ...originalEnv };
-    vi.clearAllMocks();
-  });
-
-  it('should have the correct command properties', () => {
+  it('should have correct metadata', () => {
     expect(approvalModeCommand.name).toBe('approval-mode');
-    expect(approvalModeCommand.kind).toBe(CommandKind.BUILT_IN);
     expect(approvalModeCommand.description).toBe(
       'View or change the approval mode for tool usage',
     );
+    expect(approvalModeCommand.kind).toBe(CommandKind.BUILT_IN);
   });

-  it('should show current mode, options, and usage when no arguments provided', async () => {
-    if (!approvalModeCommand.action) {
-      throw new Error('approvalModeCommand must have an action.');
-    }
-
-    const result = (await approvalModeCommand.action(
+  it('should open approval mode dialog when invoked', async () => {
+    const result = (await approvalModeCommand.action?.(
       mockContext,
       '',
-    )) as MessageActionReturn;
+    )) as OpenDialogActionReturn;

-    expect(result.type).toBe('message');
-    expect(result.messageType).toBe('info');
-    const expectedMessage = [
-      'Current approval mode: default',
-      '',
-      'Available approval modes:',
-      ' - plan: Plan mode - Analyze only, do not modify files or execute commands',
-      ' - default: Default mode - Require approval for file edits or shell commands',
-      ' - auto-edit: Auto-edit mode - Automatically approve file edits',
-      ' - yolo: YOLO mode - Automatically approve all tools',
-      '',
-      'Usage: /approval-mode <mode> [--session|--user|--project]',
-    ].join('\n');
-    expect(result.content).toBe(expectedMessage);
+    expect(result.type).toBe('dialog');
+    expect(result.dialog).toBe('approval-mode');
   });

-  it('should display error when config is not available', async () => {
-    if (!approvalModeCommand.action) {
-      throw new Error('approvalModeCommand must have an action.');
-    }
+  it('should open approval mode dialog with arguments (ignored)', async () => {
+    const result = (await approvalModeCommand.action?.(
+      mockContext,
+      'some arguments',
+    )) as OpenDialogActionReturn;

-    const nullConfigContext = createMockCommandContext({
-      services: {
-        config: null,
-      },
-    } as unknown as CommandContext);
-
-    const result = (await approvalModeCommand.action(
-      nullConfigContext,
-      '',
-    )) as MessageActionReturn;
-
-    expect(result.type).toBe('message');
-    expect(result.messageType).toBe('error');
-    expect(result.content).toBe('Configuration not available.');
+    expect(result.type).toBe('dialog');
+    expect(result.dialog).toBe('approval-mode');
   });

-  it('should change approval mode when valid mode is provided', async () => {
-    if (!approvalModeCommand.action) {
-      throw new Error('approvalModeCommand must have an action.');
-    }
-
-    const result = (await approvalModeCommand.action(
-      mockContext,
-      'plan',
-    )) as MessageActionReturn;
-
-    expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
-    expect(setSettingsValueMock).not.toHaveBeenCalled();
-    expect(result.type).toBe('message');
-    expect(result.messageType).toBe('info');
-    expect(result.content).toBe('Approval mode changed to: plan');
+  it('should not have subcommands', () => {
+    expect(approvalModeCommand.subCommands).toBeUndefined();
   });

-  it('should accept canonical auto-edit mode value', async () => {
-    if (!approvalModeCommand.action) {
-      throw new Error('approvalModeCommand must have an action.');
-    }
-
-    const result = (await approvalModeCommand.action(
-      mockContext,
-      'auto-edit',
-    )) as MessageActionReturn;
-
-    expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.AUTO_EDIT);
-    expect(setSettingsValueMock).not.toHaveBeenCalled();
-    expect(result.type).toBe('message');
-    expect(result.messageType).toBe('info');
-    expect(result.content).toBe('Approval mode changed to: auto-edit');
-  });
-
-  it('should accept auto-edit alias for compatibility', async () => {
-    if (!approvalModeCommand.action) {
-      throw new Error('approvalModeCommand must have an action.');
-    }
-
-    const result = (await approvalModeCommand.action(
-      mockContext,
-      'auto-edit',
-    )) as MessageActionReturn;
-
-    expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.AUTO_EDIT);
```
|
|
||||||
expect(setSettingsValueMock).not.toHaveBeenCalled();
|
|
||||||
expect(result.content).toBe('Approval mode changed to: auto-edit');
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should display error when invalid mode is provided', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'invalid',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(result.type).toBe('message');
|
|
||||||
expect(result.messageType).toBe('error');
|
|
||||||
expect(result.content).toContain('Invalid approval mode: invalid');
|
|
||||||
expect(result.content).toContain('Available approval modes:');
|
|
||||||
expect(result.content).toContain(
|
|
||||||
'Usage: /approval-mode <mode> [--session|--user|--project]',
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should display error when setApprovalMode throws an error', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const errorMessage = 'Failed to set approval mode';
|
|
||||||
mockContext.services.config!.setApprovalMode = vi
|
|
||||||
.fn()
|
|
||||||
.mockImplementation(() => {
|
|
||||||
throw new Error(errorMessage);
|
|
||||||
});
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'plan',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(result.type).toBe('message');
|
|
||||||
expect(result.messageType).toBe('error');
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Failed to change approval mode: ${errorMessage}`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should allow selecting auto-edit with user scope via nested subcommands', async () => {
|
|
||||||
if (!approvalModeCommand.subCommands) {
|
|
||||||
throw new Error('approvalModeCommand must have subCommands.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const userSubCommand = getScopeSubCommand(ApprovalMode.AUTO_EDIT, '--user');
|
|
||||||
if (!userSubCommand?.action) {
|
|
||||||
throw new Error('--user scope subcommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await userSubCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.AUTO_EDIT);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.User,
|
|
||||||
'approvalMode',
|
|
||||||
'auto-edit',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: auto-edit (saved to user settings at ${userSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should allow selecting plan with project scope via nested subcommands', async () => {
|
|
||||||
if (!approvalModeCommand.subCommands) {
|
|
||||||
throw new Error('approvalModeCommand must have subCommands.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const projectSubCommand = getScopeSubCommand(
|
|
||||||
ApprovalMode.PLAN,
|
|
||||||
'--project',
|
|
||||||
);
|
|
||||||
if (!projectSubCommand?.action) {
|
|
||||||
throw new Error('--project scope subcommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await projectSubCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.Workspace,
|
|
||||||
'approvalMode',
|
|
||||||
'plan',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: plan (saved to project settings at ${projectSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should allow selecting plan with session scope via nested subcommands', async () => {
|
|
||||||
if (!approvalModeCommand.subCommands) {
|
|
||||||
throw new Error('approvalModeCommand must have subCommands.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const sessionSubCommand = getScopeSubCommand(
|
|
||||||
ApprovalMode.PLAN,
|
|
||||||
'--session',
|
|
||||||
);
|
|
||||||
if (!sessionSubCommand?.action) {
|
|
||||||
throw new Error('--session scope subcommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await sessionSubCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
|
|
||||||
expect(setSettingsValueMock).not.toHaveBeenCalled();
|
|
||||||
expect(result.content).toBe('Approval mode changed to: plan');
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should allow providing a scope argument after selecting a mode subcommand', async () => {
|
|
||||||
if (!approvalModeCommand.subCommands) {
|
|
||||||
throw new Error('approvalModeCommand must have subCommands.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const planSubCommand = getModeSubCommand(ApprovalMode.PLAN);
|
|
||||||
if (!planSubCommand?.action) {
|
|
||||||
throw new Error('plan subcommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await planSubCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'--user',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.User,
|
|
||||||
'approvalMode',
|
|
||||||
'plan',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: plan (saved to user settings at ${userSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should support --user plan pattern (scope first)', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'--user plan',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.User,
|
|
||||||
'approvalMode',
|
|
||||||
'plan',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: plan (saved to user settings at ${userSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should support plan --user pattern (mode first)', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'plan --user',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.PLAN);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.User,
|
|
||||||
'approvalMode',
|
|
||||||
'plan',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: plan (saved to user settings at ${userSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should support --project auto-edit pattern', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'--project auto-edit',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(setApprovalModeMock).toHaveBeenCalledWith(ApprovalMode.AUTO_EDIT);
|
|
||||||
expect(setSettingsValueMock).toHaveBeenCalledWith(
|
|
||||||
SettingScope.Workspace,
|
|
||||||
'approvalMode',
|
|
||||||
'auto-edit',
|
|
||||||
);
|
|
||||||
expect(result.content).toBe(
|
|
||||||
`Approval mode changed to: auto-edit (saved to project settings at ${projectSettingsPath})`,
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should display error when only scope flag is provided', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'--user',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(result.type).toBe('message');
|
|
||||||
expect(result.messageType).toBe('error');
|
|
||||||
expect(result.content).toContain('Missing approval mode');
|
|
||||||
expect(setApprovalModeMock).not.toHaveBeenCalled();
|
|
||||||
expect(setSettingsValueMock).not.toHaveBeenCalled();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should display error when multiple scope flags are provided', async () => {
|
|
||||||
if (!approvalModeCommand.action) {
|
|
||||||
throw new Error('approvalModeCommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await approvalModeCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'--user --project plan',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(result.type).toBe('message');
|
|
||||||
expect(result.messageType).toBe('error');
|
|
||||||
expect(result.content).toContain('Multiple scope flags provided');
|
|
||||||
expect(setApprovalModeMock).not.toHaveBeenCalled();
|
|
||||||
expect(setSettingsValueMock).not.toHaveBeenCalled();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should surface a helpful error when scope subcommands receive extra arguments', async () => {
|
|
||||||
if (!approvalModeCommand.subCommands) {
|
|
||||||
throw new Error('approvalModeCommand must have subCommands.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const userSubCommand = getScopeSubCommand(ApprovalMode.DEFAULT, '--user');
|
|
||||||
if (!userSubCommand?.action) {
|
|
||||||
throw new Error('--user scope subcommand must have an action.');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = (await userSubCommand.action(
|
|
||||||
mockContext,
|
|
||||||
'extra',
|
|
||||||
)) as MessageActionReturn;
|
|
||||||
|
|
||||||
expect(result.type).toBe('message');
|
|
||||||
expect(result.messageType).toBe('error');
|
|
||||||
expect(result.content).toBe(
|
|
||||||
'Scope subcommands do not accept additional arguments.',
|
|
||||||
);
|
|
||||||
expect(setApprovalModeMock).not.toHaveBeenCalled();
|
|
||||||
expect(setSettingsValueMock).not.toHaveBeenCalled();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should provide completion for approval modes', async () => {
|
|
||||||
if (!approvalModeCommand.completion) {
|
|
||||||
throw new Error('approvalModeCommand must have a completion function.');
|
|
||||||
}
|
|
||||||
|
|
||||||
// Test partial mode completion
|
|
||||||
const result = await approvalModeCommand.completion(mockContext, 'p');
|
|
||||||
expect(result).toEqual(['plan']);
|
|
||||||
|
|
||||||
const result2 = await approvalModeCommand.completion(mockContext, 'a');
|
|
||||||
expect(result2).toEqual(['auto-edit']);
|
|
||||||
|
|
||||||
// Test empty completion - should suggest available modes first
|
|
||||||
const result3 = await approvalModeCommand.completion(mockContext, '');
|
|
||||||
expect(result3).toEqual(['plan', 'default', 'auto-edit', 'yolo']);
|
|
||||||
|
|
||||||
const result4 = await approvalModeCommand.completion(mockContext, 'AUTO');
|
|
||||||
expect(result4).toEqual(['auto-edit']);
|
|
||||||
|
|
||||||
// Test mode first pattern: 'plan ' should suggest scope flags
|
|
||||||
const result5 = await approvalModeCommand.completion(mockContext, 'plan ');
|
|
||||||
expect(result5).toEqual(['--session', '--project', '--user']);
|
|
||||||
|
|
||||||
const result6 = await approvalModeCommand.completion(
|
|
||||||
mockContext,
|
|
||||||
'plan --u',
|
|
||||||
);
|
|
||||||
expect(result6).toEqual(['--user']);
|
|
||||||
|
|
||||||
// Test scope first pattern: '--user ' should suggest modes
|
|
||||||
const result7 = await approvalModeCommand.completion(
|
|
||||||
mockContext,
|
|
||||||
'--user ',
|
|
||||||
);
|
|
||||||
expect(result7).toEqual(['plan', 'default', 'auto-edit', 'yolo']);
|
|
||||||
|
|
||||||
const result8 = await approvalModeCommand.completion(
|
|
||||||
mockContext,
|
|
||||||
'--user p',
|
|
||||||
);
|
|
||||||
expect(result8).toEqual(['plan']);
|
|
||||||
|
|
||||||
// Test completed patterns should return empty
|
|
||||||
const result9 = await approvalModeCommand.completion(
|
|
||||||
mockContext,
|
|
||||||
'plan --user ',
|
|
||||||
);
|
|
||||||
expect(result9).toEqual([]);
|
|
||||||
|
|
||||||
const result10 = await approvalModeCommand.completion(
|
|
||||||
mockContext,
|
|
||||||
'--user plan ',
|
|
||||||
);
|
|
||||||
expect(result10).toEqual([]);
|
|
||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|||||||
@@ -7,428 +7,19 @@
 import type {
   SlashCommand,
   CommandContext,
-  MessageActionReturn,
+  OpenDialogActionReturn,
 } from './types.js';
 import { CommandKind } from './types.js';
-import { ApprovalMode, APPROVAL_MODES } from '@qwen-code/qwen-code-core';
-import { SettingScope } from '../../config/settings.js';
-
-const USAGE_MESSAGE =
-  'Usage: /approval-mode <mode> [--session|--user|--project]';
-
-const normalizeInputMode = (value: string): string =>
-  value.trim().toLowerCase();
-
-const tokenizeArgs = (args: string): string[] => {
-  const matches = args.match(/(?:"[^"]*"|'[^']*'|[^\s"']+)/g);
-  if (!matches) {
-    return [];
-  }
-
-  return matches.map((token) => {
-    if (
-      (token.startsWith('"') && token.endsWith('"')) ||
-      (token.startsWith("'") && token.endsWith("'"))
-    ) {
-      return token.slice(1, -1);
-    }
-    return token;
-  });
-};
-
-const parseApprovalMode = (value: string | null): ApprovalMode | null => {
-  if (!value) {
-    return null;
-  }
-
-  const normalized = normalizeInputMode(value).replace(/_/g, '-');
-  const matchIndex = APPROVAL_MODES.findIndex(
-    (candidate) => candidate === normalized,
-  );
-
-  return matchIndex === -1 ? null : APPROVAL_MODES[matchIndex];
-};
-
-const formatModeDescription = (mode: ApprovalMode): string => {
-  switch (mode) {
-    case ApprovalMode.PLAN:
-      return 'Plan mode - Analyze only, do not modify files or execute commands';
-    case ApprovalMode.DEFAULT:
-      return 'Default mode - Require approval for file edits or shell commands';
-    case ApprovalMode.AUTO_EDIT:
-      return 'Auto-edit mode - Automatically approve file edits';
-    case ApprovalMode.YOLO:
-      return 'YOLO mode - Automatically approve all tools';
-    default:
-      return `${mode} mode`;
-  }
-};
-
-const parseApprovalArgs = (
-  args: string,
-): {
-  mode: string | null;
-  scope: 'session' | 'user' | 'project';
-  error?: string;
-} => {
-  const trimmedArgs = args.trim();
-  if (!trimmedArgs) {
-    return { mode: null, scope: 'session' };
-  }
-
-  const tokens = tokenizeArgs(trimmedArgs);
-  let mode: string | null = null;
-  let scope: 'session' | 'user' | 'project' = 'session';
-  let scopeFlag: string | null = null;
-
-  // Find scope flag and mode
-  for (const token of tokens) {
-    if (token === '--session' || token === '--user' || token === '--project') {
-      if (scopeFlag) {
-        return {
-          mode: null,
-          scope: 'session',
-          error: 'Multiple scope flags provided',
-        };
-      }
-      scopeFlag = token;
-      scope = token.substring(2) as 'session' | 'user' | 'project';
-    } else if (!mode) {
-      mode = token;
-    } else {
-      return {
-        mode: null,
-        scope: 'session',
-        error: 'Invalid arguments provided',
-      };
-    }
-  }
-
-  if (!mode) {
-    return { mode: null, scope: 'session', error: 'Missing approval mode' };
-  }
-
-  return { mode, scope };
-};
-
-const setApprovalModeWithScope = async (
-  context: CommandContext,
-  mode: ApprovalMode,
-  scope: 'session' | 'user' | 'project',
-): Promise<MessageActionReturn> => {
-  const { services } = context;
-  const { config } = services;
-
-  if (!config) {
-    return {
-      type: 'message',
-      messageType: 'error',
-      content: 'Configuration not available.',
-    };
-  }
-
-  try {
-    // Always set the mode in the current session
-    config.setApprovalMode(mode);
-
-    // If scope is not session, also persist to settings
-    if (scope !== 'session') {
-      const { settings } = context.services;
-      if (!settings || typeof settings.setValue !== 'function') {
-        return {
-          type: 'message',
-          messageType: 'error',
-          content:
-            'Settings service is not available; unable to persist the approval mode.',
-        };
-      }
-
-      const settingScope =
-        scope === 'user' ? SettingScope.User : SettingScope.Workspace;
-      const scopeLabel = scope === 'user' ? 'user' : 'project';
-      let settingsPath: string | undefined;
-
-      try {
-        if (typeof settings.forScope === 'function') {
-          settingsPath = settings.forScope(settingScope)?.path;
-        }
-      } catch (_error) {
-        settingsPath = undefined;
-      }
-
-      try {
-        settings.setValue(settingScope, 'approvalMode', mode);
-      } catch (error) {
-        return {
-          type: 'message',
-          messageType: 'error',
-          content: `Failed to save approval mode: ${(error as Error).message}`,
-        };
-      }
-
-      const locationSuffix = settingsPath ? ` at ${settingsPath}` : '';
-
-      const scopeSuffix = ` (saved to ${scopeLabel} settings${locationSuffix})`;
-
-      return {
-        type: 'message',
-        messageType: 'info',
-        content: `Approval mode changed to: ${mode}${scopeSuffix}`,
-      };
-    }
-
-    return {
-      type: 'message',
-      messageType: 'info',
-      content: `Approval mode changed to: ${mode}`,
-    };
-  } catch (error) {
-    return {
-      type: 'message',
-      messageType: 'error',
-      content: `Failed to change approval mode: ${(error as Error).message}`,
-    };
-  }
-};
-
 export const approvalModeCommand: SlashCommand = {
   name: 'approval-mode',
   description: 'View or change the approval mode for tool usage',
   kind: CommandKind.BUILT_IN,
   action: async (
-    context: CommandContext,
-    args: string,
-  ): Promise<MessageActionReturn> => {
-    const { config } = context.services;
-    if (!config) {
-      return {
-        type: 'message',
-        messageType: 'error',
-        content: 'Configuration not available.',
-      };
-    }
-
-    // If no arguments provided, show current mode and available options
-    if (!args || args.trim() === '') {
-      const currentMode =
-        typeof config.getApprovalMode === 'function'
-          ? config.getApprovalMode()
-          : null;
-
-      const messageLines: string[] = [];
-
-      if (currentMode) {
-        messageLines.push(`Current approval mode: ${currentMode}`);
-        messageLines.push('');
-      }
-
-      messageLines.push('Available approval modes:');
-      for (const mode of APPROVAL_MODES) {
-        messageLines.push(`  - ${mode}: ${formatModeDescription(mode)}`);
-      }
-      messageLines.push('');
-      messageLines.push(USAGE_MESSAGE);
-
-      return {
-        type: 'message',
-        messageType: 'info',
-        content: messageLines.join('\n'),
-      };
-    }
-
-    // Parse arguments flexibly
-    const parsed = parseApprovalArgs(args);
-
-    if (parsed.error) {
-      return {
-        type: 'message',
-        messageType: 'error',
-        content: `${parsed.error}. ${USAGE_MESSAGE}`,
-      };
-    }
-
-    if (!parsed.mode) {
-      return {
-        type: 'message',
-        messageType: 'info',
-        content: USAGE_MESSAGE,
-      };
-    }
-
-    const requestedMode = parseApprovalMode(parsed.mode);
-
-    if (!requestedMode) {
-      let message = `Invalid approval mode: ${parsed.mode}\n\n`;
-      message += 'Available approval modes:\n';
-      for (const mode of APPROVAL_MODES) {
-        message += `  - ${mode}: ${formatModeDescription(mode)}\n`;
-      }
-      message += `\n${USAGE_MESSAGE}`;
-      return {
-        type: 'message',
-        messageType: 'error',
-        content: message,
-      };
-    }
-
-    return setApprovalModeWithScope(context, requestedMode, parsed.scope);
-  },
-  subCommands: APPROVAL_MODES.map((mode) => ({
-    name: mode,
-    description: formatModeDescription(mode),
-    kind: CommandKind.BUILT_IN,
-    subCommands: [
-      {
-        name: '--session',
-        description: 'Apply to current session only (temporary)',
-        kind: CommandKind.BUILT_IN,
-        action: async (
-          context: CommandContext,
-          args: string,
-        ): Promise<MessageActionReturn> => {
-          if (args.trim().length > 0) {
-            return {
-              type: 'message',
-              messageType: 'error',
-              content: 'Scope subcommands do not accept additional arguments.',
-            };
-          }
-          return setApprovalModeWithScope(context, mode, 'session');
-        },
-      },
-      {
-        name: '--project',
-        description: 'Persist for this project/workspace',
-        kind: CommandKind.BUILT_IN,
-        action: async (
-          context: CommandContext,
-          args: string,
-        ): Promise<MessageActionReturn> => {
-          if (args.trim().length > 0) {
-            return {
-              type: 'message',
-              messageType: 'error',
-              content: 'Scope subcommands do not accept additional arguments.',
-            };
-          }
-          return setApprovalModeWithScope(context, mode, 'project');
-        },
-      },
-      {
-        name: '--user',
-        description: 'Persist for this user on this machine',
-        kind: CommandKind.BUILT_IN,
-        action: async (
-          context: CommandContext,
-          args: string,
-        ): Promise<MessageActionReturn> => {
-          if (args.trim().length > 0) {
-            return {
-              type: 'message',
-              messageType: 'error',
-              content: 'Scope subcommands do not accept additional arguments.',
-            };
-          }
-          return setApprovalModeWithScope(context, mode, 'user');
-        },
-      },
-    ],
-    action: async (
-      context: CommandContext,
-      args: string,
-    ): Promise<MessageActionReturn> => {
-      if (args.trim().length > 0) {
-        // Allow users who type `/approval-mode plan --user` via the subcommand path
-        const parsed = parseApprovalArgs(`${mode} ${args}`);
-        if (parsed.error) {
-          return {
-            type: 'message',
-            messageType: 'error',
-            content: `${parsed.error}. ${USAGE_MESSAGE}`,
-          };
-        }
-
-        const normalizedMode = parseApprovalMode(parsed.mode);
-        if (!normalizedMode) {
-          return {
-            type: 'message',
-            messageType: 'error',
-            content: `Invalid approval mode: ${parsed.mode}. ${USAGE_MESSAGE}`,
-          };
-        }
-
-        return setApprovalModeWithScope(context, normalizedMode, parsed.scope);
-      }
-
-      return setApprovalModeWithScope(context, mode, 'session');
-    },
-  })),
-  completion: async (_context: CommandContext, partialArg: string) => {
-    const tokens = tokenizeArgs(partialArg);
-    const hasTrailingSpace = /\s$/.test(partialArg);
-    const currentSegment = hasTrailingSpace
-      ? ''
-      : tokens.length > 0
-        ? tokens[tokens.length - 1]
-        : '';
-
-    const normalizedCurrent = normalizeInputMode(currentSegment).replace(
-      /_/g,
-      '-',
-    );
-
-    const scopeValues = ['--session', '--project', '--user'];
-
-    const normalizeToken = (token: string) =>
-      normalizeInputMode(token).replace(/_/g, '-');
-
-    const normalizedTokens = tokens.map(normalizeToken);
-
-    if (tokens.length === 0) {
-      if (currentSegment.startsWith('-')) {
-        return scopeValues.filter((scope) => scope.startsWith(currentSegment));
-      }
-      return APPROVAL_MODES;
-    }
-
-    if (tokens.length === 1 && !hasTrailingSpace) {
-      const originalToken = tokens[0];
-      if (originalToken.startsWith('-')) {
-        return scopeValues.filter((scope) =>
-          scope.startsWith(normalizedCurrent),
-        );
-      }
-      return APPROVAL_MODES.filter((mode) =>
-        mode.startsWith(normalizedCurrent),
-      );
-    }
-
-    if (tokens.length === 1 && hasTrailingSpace) {
-      const normalizedFirst = normalizedTokens[0];
-      if (scopeValues.includes(tokens[0])) {
-        return APPROVAL_MODES;
-      }
-      if (APPROVAL_MODES.includes(normalizedFirst as ApprovalMode)) {
-        return scopeValues;
-      }
-      return APPROVAL_MODES;
-    }
-
-    if (tokens.length === 2 && !hasTrailingSpace) {
-      const normalizedFirst = normalizedTokens[0];
-      if (scopeValues.includes(tokens[0])) {
-        return APPROVAL_MODES.filter((mode) =>
-          mode.startsWith(normalizedCurrent),
-        );
-      }
-      if (APPROVAL_MODES.includes(normalizedFirst as ApprovalMode)) {
-        return scopeValues.filter((scope) =>
-          scope.startsWith(normalizedCurrent),
-        );
-      }
-      return [];
-    }
-
-    return [];
-  },
+    _context: CommandContext,
+    _args: string,
+  ): Promise<OpenDialogActionReturn> => ({
+    type: 'dialog',
+    dialog: 'approval-mode',
+  }),
 };
@@ -8,41 +8,34 @@ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import open from 'open';
 import { bugCommand } from './bugCommand.js';
 import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
-import { getCliVersion } from '../../utils/version.js';
 import { GIT_COMMIT_INFO } from '../../generated/git-commit.js';
-import { formatMemoryUsage } from '../utils/formatters.js';
+import { AuthType } from '@qwen-code/qwen-code-core';
+import * as systemInfoUtils from '../../utils/systemInfo.js';
 
 // Mock dependencies
 vi.mock('open');
-vi.mock('../../utils/version.js');
-vi.mock('../utils/formatters.js');
-vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
-  const actual =
-    await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
-  return {
-    ...actual,
-    IdeClient: {
-      getInstance: () => ({
-        getDetectedIdeDisplayName: vi.fn().mockReturnValue('VSCode'),
-      }),
-    },
-    sessionId: 'test-session-id',
-  };
-});
-vi.mock('node:process', () => ({
-  default: {
-    platform: 'test-platform',
-    version: 'v20.0.0',
-    // Keep other necessary process properties if needed by other parts of the code
-    env: process.env,
-    memoryUsage: () => ({ rss: 0 }),
-  },
-}));
+vi.mock('../../utils/systemInfo.js');
 
 describe('bugCommand', () => {
   beforeEach(() => {
-    vi.mocked(getCliVersion).mockResolvedValue('0.1.0');
-    vi.mocked(formatMemoryUsage).mockReturnValue('100 MB');
+    vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
+      cliVersion: '0.1.0',
+      osPlatform: 'test-platform',
+      osArch: 'x64',
+      osRelease: '22.0.0',
+      nodeVersion: 'v20.0.0',
+      npmVersion: '10.0.0',
+      sandboxEnv: 'test',
+      modelVersion: 'qwen3-coder-plus',
+      selectedAuthType: '',
+      ideClient: 'VSCode',
+      sessionId: 'test-session-id',
+      memoryUsage: '100 MB',
+      gitCommit:
+        GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
+          ? GIT_COMMIT_INFO
+          : undefined,
+    });
     vi.stubEnv('SANDBOX', 'qwen-test');
   });
 
@@ -55,9 +48,7 @@ describe('bugCommand', () => {
     const mockContext = createMockCommandContext({
       services: {
         config: {
-          getModel: () => 'qwen3-coder-plus',
           getBugCommand: () => undefined,
-          getIdeMode: () => true,
         },
       },
     });
@@ -65,13 +56,21 @@ describe('bugCommand', () => {
     if (!bugCommand.action) throw new Error('Action is not defined');
     await bugCommand.action(mockContext, 'A test bug');
 
+    const gitCommitLine =
+      GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
+        ? `* **Git Commit:** ${GIT_COMMIT_INFO}\n`
+        : '';
     const expectedInfo = `
 * **CLI Version:** 0.1.0
-* **Git Commit:** ${GIT_COMMIT_INFO}
|
${gitCommitLine}* **Model:** qwen3-coder-plus
|
||||||
|
* **Sandbox:** test
|
||||||
|
* **OS Platform:** test-platform
|
||||||
|
* **OS Arch:** x64
|
||||||
|
* **OS Release:** 22.0.0
|
||||||
|
* **Node.js Version:** v20.0.0
|
||||||
|
* **NPM Version:** 10.0.0
|
||||||
* **Session ID:** test-session-id
|
* **Session ID:** test-session-id
|
||||||
* **Operating System:** test-platform v20.0.0
|
* **Auth Method:**
|
||||||
* **Sandbox Environment:** test
|
|
||||||
* **Model Version:** qwen3-coder-plus
|
|
||||||
* **Memory Usage:** 100 MB
|
* **Memory Usage:** 100 MB
|
||||||
* **IDE Client:** VSCode
|
* **IDE Client:** VSCode
|
||||||
`;
|
`;
|
||||||
@@ -88,9 +87,7 @@ describe('bugCommand', () => {
|
|||||||
const mockContext = createMockCommandContext({
|
const mockContext = createMockCommandContext({
|
||||||
services: {
|
services: {
|
||||||
config: {
|
config: {
|
||||||
getModel: () => 'qwen3-coder-plus',
|
|
||||||
getBugCommand: () => ({ urlTemplate: customTemplate }),
|
getBugCommand: () => ({ urlTemplate: customTemplate }),
|
||||||
getIdeMode: () => true,
|
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
});
|
});
|
||||||
@@ -98,13 +95,21 @@ describe('bugCommand', () => {
|
|||||||
if (!bugCommand.action) throw new Error('Action is not defined');
|
if (!bugCommand.action) throw new Error('Action is not defined');
|
||||||
await bugCommand.action(mockContext, 'A custom bug');
|
await bugCommand.action(mockContext, 'A custom bug');
|
||||||
|
|
||||||
|
const gitCommitLine =
|
||||||
|
GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
|
||||||
|
? `* **Git Commit:** ${GIT_COMMIT_INFO}\n`
|
||||||
|
: '';
|
||||||
const expectedInfo = `
|
const expectedInfo = `
|
||||||
* **CLI Version:** 0.1.0
|
* **CLI Version:** 0.1.0
|
||||||
* **Git Commit:** ${GIT_COMMIT_INFO}
|
${gitCommitLine}* **Model:** qwen3-coder-plus
|
||||||
|
* **Sandbox:** test
|
||||||
|
* **OS Platform:** test-platform
|
||||||
|
* **OS Arch:** x64
|
||||||
|
* **OS Release:** 22.0.0
|
||||||
|
* **Node.js Version:** v20.0.0
|
||||||
|
* **NPM Version:** 10.0.0
|
||||||
* **Session ID:** test-session-id
|
* **Session ID:** test-session-id
|
||||||
* **Operating System:** test-platform v20.0.0
|
* **Auth Method:**
|
||||||
* **Sandbox Environment:** test
|
|
||||||
* **Model Version:** qwen3-coder-plus
|
|
||||||
* **Memory Usage:** 100 MB
|
* **Memory Usage:** 100 MB
|
||||||
* **IDE Client:** VSCode
|
* **IDE Client:** VSCode
|
||||||
`;
|
`;
|
||||||
@@ -114,4 +119,62 @@ describe('bugCommand', () => {
|
|||||||
|
|
||||||
expect(open).toHaveBeenCalledWith(expectedUrl);
|
expect(open).toHaveBeenCalledWith(expectedUrl);
|
||||||
});
|
});
|
||||||
|
|
||||||
|
it('should include Base URL when auth type is OpenAI', async () => {
|
||||||
|
vi.mocked(systemInfoUtils.getExtendedSystemInfo).mockResolvedValue({
|
||||||
|
cliVersion: '0.1.0',
|
||||||
|
osPlatform: 'test-platform',
|
||||||
|
osArch: 'x64',
|
||||||
|
osRelease: '22.0.0',
|
||||||
|
nodeVersion: 'v20.0.0',
|
||||||
|
npmVersion: '10.0.0',
|
||||||
|
sandboxEnv: 'test',
|
||||||
|
modelVersion: 'qwen3-coder-plus',
|
||||||
|
selectedAuthType: AuthType.USE_OPENAI,
|
||||||
|
ideClient: 'VSCode',
|
||||||
|
sessionId: 'test-session-id',
|
||||||
|
memoryUsage: '100 MB',
|
||||||
|
baseUrl: 'https://api.openai.com/v1',
|
||||||
|
gitCommit:
|
||||||
|
GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
|
||||||
|
? GIT_COMMIT_INFO
|
||||||
|
: undefined,
|
||||||
|
});
|
||||||
|
|
||||||
|
const mockContext = createMockCommandContext({
|
||||||
|
services: {
|
||||||
|
config: {
|
||||||
|
getBugCommand: () => undefined,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!bugCommand.action) throw new Error('Action is not defined');
|
||||||
|
await bugCommand.action(mockContext, 'OpenAI bug');
|
||||||
|
|
||||||
|
const gitCommitLine =
|
||||||
|
GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
|
||||||
|
? `* **Git Commit:** ${GIT_COMMIT_INFO}\n`
|
||||||
|
: '';
|
||||||
|
const expectedInfo = `
|
||||||
|
* **CLI Version:** 0.1.0
|
||||||
|
${gitCommitLine}* **Model:** qwen3-coder-plus
|
||||||
|
* **Sandbox:** test
|
||||||
|
* **OS Platform:** test-platform
|
||||||
|
* **OS Arch:** x64
|
||||||
|
* **OS Release:** 22.0.0
|
||||||
|
* **Node.js Version:** v20.0.0
|
||||||
|
* **NPM Version:** 10.0.0
|
||||||
|
* **Session ID:** test-session-id
|
||||||
|
* **Auth Method:** ${AuthType.USE_OPENAI}
|
||||||
|
* **Base URL:** https://api.openai.com/v1
|
||||||
|
* **Memory Usage:** 100 MB
|
||||||
|
* **IDE Client:** VSCode
|
||||||
|
`;
|
||||||
|
const expectedUrl =
|
||||||
|
'https://github.com/QwenLM/qwen-code/issues/new?template=bug_report.yml&title=OpenAI%20bug&info=' +
|
||||||
|
encodeURIComponent(expectedInfo);
|
||||||
|
|
||||||
|
expect(open).toHaveBeenCalledWith(expectedUrl);
|
||||||
|
});
|
||||||
});
|
});
|
||||||
@@ -5,17 +5,17 @@
  */

 import open from 'open';
-import process from 'node:process';
 import {
   type CommandContext,
   type SlashCommand,
   CommandKind,
 } from './types.js';
 import { MessageType } from '../types.js';
-import { GIT_COMMIT_INFO } from '../../generated/git-commit.js';
-import { formatMemoryUsage } from '../utils/formatters.js';
-import { getCliVersion } from '../../utils/version.js';
-import { IdeClient, sessionId } from '@qwen-code/qwen-code-core';
+import { getExtendedSystemInfo } from '../../utils/systemInfo.js';
+import {
+  getSystemInfoFields,
+  getFieldValue,
+} from '../../utils/systemInfoFields.js';

 export const bugCommand: SlashCommand = {
   name: 'bug',
@@ -23,39 +23,20 @@ export const bugCommand: SlashCommand = {
   kind: CommandKind.BUILT_IN,
   action: async (context: CommandContext, args?: string): Promise<void> => {
     const bugDescription = (args || '').trim();
-    const { config } = context.services;
+    const systemInfo = await getExtendedSystemInfo(context);

-    const osVersion = `${process.platform} ${process.version}`;
-    let sandboxEnv = 'no sandbox';
-    if (process.env['SANDBOX'] && process.env['SANDBOX'] !== 'sandbox-exec') {
-      sandboxEnv = process.env['SANDBOX'].replace(/^qwen-(?:code-)?/, '');
-    } else if (process.env['SANDBOX'] === 'sandbox-exec') {
-      sandboxEnv = `sandbox-exec (${
-        process.env['SEATBELT_PROFILE'] || 'unknown'
-      })`;
-    }
-    const modelVersion = config?.getModel() || 'Unknown';
-    const cliVersion = await getCliVersion();
-    const memoryUsage = formatMemoryUsage(process.memoryUsage().rss);
-    const ideClient = await getIdeClientName(context);
+    const fields = getSystemInfoFields(systemInfo);

-    let info = `
-* **CLI Version:** ${cliVersion}
-* **Git Commit:** ${GIT_COMMIT_INFO}
-* **Session ID:** ${sessionId}
-* **Operating System:** ${osVersion}
-* **Sandbox Environment:** ${sandboxEnv}
-* **Model Version:** ${modelVersion}
-* **Memory Usage:** ${memoryUsage}
-`;
-    if (ideClient) {
-      info += `* **IDE Client:** ${ideClient}\n`;
+    // Generate bug report info using the same field configuration
+    let info = '\n';
+    for (const field of fields) {
+      info += `* **${field.label}:** ${getFieldValue(field, systemInfo)}\n`;
     }

     let bugReportUrl =
       'https://github.com/QwenLM/qwen-code/issues/new?template=bug_report.yml&title={title}&info={info}';

-    const bugCommandSettings = config?.getBugCommand();
+    const bugCommandSettings = context.services.config?.getBugCommand();
     if (bugCommandSettings?.urlTemplate) {
       bugReportUrl = bugCommandSettings.urlTemplate;
     }
@@ -87,11 +68,3 @@ export const bugCommand: SlashCommand = {
     }
   },
 };
-
-async function getIdeClientName(context: CommandContext) {
-  if (!context.services.config?.getIdeMode()) {
-    return '';
-  }
-  const ideClient = await IdeClient.getInstance();
-  return ideClient.getDetectedIdeDisplayName() ?? '';
-}
@@ -139,8 +139,8 @@ describe('chatCommand', () => {
       .match(/(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2})/);
     const formattedDate = isoDate ? `${isoDate[1]} ${isoDate[2]}` : '';
     expect(content).toContain(formattedDate);
-    const index1 = content.indexOf('- \u001b[36mtest1\u001b[0m');
-    const index2 = content.indexOf('- \u001b[36mtest2\u001b[0m');
+    const index1 = content.indexOf('- test1');
+    const index2 = content.indexOf('- test2');
     expect(index1).toBeGreaterThanOrEqual(0);
     expect(index2).toBeGreaterThan(index1);
   });
@@ -89,9 +89,9 @@ const listCommand: SlashCommand = {
     const isoString = chat.mtime.toISOString();
     const match = isoString.match(/(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2})/);
     const formattedDate = match ? `${match[1]} ${match[2]}` : 'Invalid Date';
-    message += ` - \u001b[36m${paddedName}\u001b[0m \u001b[90m(saved on ${formattedDate})\u001b[0m\n`;
+    message += ` - ${paddedName} (saved on ${formattedDate})\n`;
   }
-  message += `\n\u001b[90mNote: Newest last, oldest first\u001b[0m`;
+  message += `\nNote: Newest last, oldest first`;
   return {
     type: 'message',
     messageType: 'info',
@@ -129,7 +129,8 @@ export interface OpenDialogActionReturn {
     | 'model'
     | 'subagent_create'
     | 'subagent_list'
-    | 'permissions';
+    | 'permissions'
+    | 'approval-mode';
 }

 /**
@@ -7,127 +7,46 @@
 import type React from 'react';
 import { Box, Text } from 'ink';
 import { theme } from '../semantic-colors.js';
-import { GIT_COMMIT_INFO } from '../../generated/git-commit.js';
+import type { ExtendedSystemInfo } from '../../utils/systemInfo.js';
+import {
+  getSystemInfoFields,
+  getFieldValue,
+  type SystemInfoField,
+} from '../../utils/systemInfoFields.js';

-interface AboutBoxProps {
-  cliVersion: string;
-  osVersion: string;
-  sandboxEnv: string;
-  modelVersion: string;
-  selectedAuthType: string;
-  gcpProject: string;
-  ideClient: string;
-}
+type AboutBoxProps = ExtendedSystemInfo;

-export const AboutBox: React.FC<AboutBoxProps> = ({
-  cliVersion,
-  osVersion,
-  sandboxEnv,
-  modelVersion,
-  selectedAuthType,
-  gcpProject,
-  ideClient,
-}) => (
-  <Box
-    borderStyle="round"
-    borderColor={theme.border.default}
-    flexDirection="column"
-    padding={1}
-    marginY={1}
-    width="100%"
-  >
-    <Box marginBottom={1}>
-      <Text bold color={theme.text.accent}>
-        About Qwen Code
-      </Text>
-    </Box>
-    <Box flexDirection="row">
-      <Box width="35%">
-        <Text bold color={theme.text.link}>
-          CLI Version
+export const AboutBox: React.FC<AboutBoxProps> = (props) => {
+  const fields = getSystemInfoFields(props);
+
+  return (
+    <Box
+      borderStyle="round"
+      borderColor={theme.border.default}
+      flexDirection="column"
+      padding={1}
+      marginY={1}
+      width="100%"
+    >
+      <Box marginBottom={1}>
+        <Text bold color={theme.text.accent}>
+          About Qwen Code
         </Text>
       </Box>
-      <Box>
-        <Text color={theme.text.primary}>{cliVersion}</Text>
-      </Box>
+      {fields.map((field: SystemInfoField) => (
+        <Box key={field.key} flexDirection="row">
+          <Box width="35%">
+            <Text bold color={theme.text.link}>
+              {field.label}
+            </Text>
+          </Box>
+          <Box>
+            <Text color={theme.text.primary}>
+              {getFieldValue(field, props)}
+            </Text>
+          </Box>
+        </Box>
+      ))}
     </Box>
-    {GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO) && (
-      <Box flexDirection="row">
-        <Box width="35%">
-          <Text bold color={theme.text.link}>
-            Git Commit
-          </Text>
-        </Box>
-        <Box>
-          <Text color={theme.text.primary}>{GIT_COMMIT_INFO}</Text>
-        </Box>
-      </Box>
-    )}
-    <Box flexDirection="row">
-      <Box width="35%">
-        <Text bold color={theme.text.link}>
-          Model
-        </Text>
-      </Box>
-      <Box>
-        <Text color={theme.text.primary}>{modelVersion}</Text>
-      </Box>
-    </Box>
-    <Box flexDirection="row">
-      <Box width="35%">
-        <Text bold color={theme.text.link}>
-          Sandbox
-        </Text>
-      </Box>
-      <Box>
-        <Text color={theme.text.primary}>{sandboxEnv}</Text>
-      </Box>
-    </Box>
-    <Box flexDirection="row">
-      <Box width="35%">
-        <Text bold color={theme.text.link}>
-          OS
-        </Text>
-      </Box>
-      <Box>
-        <Text color={theme.text.primary}>{osVersion}</Text>
-      </Box>
-    </Box>
-    <Box flexDirection="row">
-      <Box width="35%">
-        <Text bold color={theme.text.link}>
-          Auth Method
-        </Text>
-      </Box>
-      <Box>
-        <Text color={theme.text.primary}>
-          {selectedAuthType.startsWith('oauth') ? 'OAuth' : selectedAuthType}
-        </Text>
-      </Box>
-    </Box>
-    {gcpProject && (
-      <Box flexDirection="row">
-        <Box width="35%">
-          <Text bold color={theme.text.link}>
-            GCP Project
-          </Text>
-        </Box>
-        <Box>
-          <Text color={theme.text.primary}>{gcpProject}</Text>
-        </Box>
-      </Box>
-    )}
-    {ideClient && (
-      <Box flexDirection="row">
-        <Box width="35%">
-          <Text bold color={theme.text.link}>
-            IDE Client
-          </Text>
-        </Box>
-        <Box>
-          <Text color={theme.text.primary}>{ideClient}</Text>
-        </Box>
-      </Box>
-    )}
-  </Box>
-);
+  );
+};
packages/cli/src/ui/components/ApprovalModeDialog.tsx (new file, 183 lines)
@@ -0,0 +1,183 @@
+/**
+ * @license
+ * Copyright 2025 Qwen
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import type React from 'react';
+import { useCallback, useState } from 'react';
+import { Box, Text } from 'ink';
+import { theme } from '../semantic-colors.js';
+import { ApprovalMode, APPROVAL_MODES } from '@qwen-code/qwen-code-core';
+import { RadioButtonSelect } from './shared/RadioButtonSelect.js';
+import type { LoadedSettings } from '../../config/settings.js';
+import { SettingScope } from '../../config/settings.js';
+import { getScopeMessageForSetting } from '../../utils/dialogScopeUtils.js';
+import { useKeypress } from '../hooks/useKeypress.js';
+import { ScopeSelector } from './shared/ScopeSelector.js';
+
+interface ApprovalModeDialogProps {
+  /** Callback function when an approval mode is selected */
+  onSelect: (mode: ApprovalMode | undefined, scope: SettingScope) => void;
+
+  /** The settings object */
+  settings: LoadedSettings;
+
+  /** Current approval mode */
+  currentMode: ApprovalMode;
+
+  /** Available terminal height for layout calculations */
+  availableTerminalHeight?: number;
+}
+
+const formatModeDescription = (mode: ApprovalMode): string => {
+  switch (mode) {
+    case ApprovalMode.PLAN:
+      return 'Analyze only, do not modify files or execute commands';
+    case ApprovalMode.DEFAULT:
+      return 'Require approval for file edits or shell commands';
+    case ApprovalMode.AUTO_EDIT:
+      return 'Automatically approve file edits';
+    case ApprovalMode.YOLO:
+      return 'Automatically approve all tools';
+    default:
+      return `${mode} mode`;
+  }
+};
+
+export function ApprovalModeDialog({
+  onSelect,
+  settings,
+  currentMode,
+  availableTerminalHeight: _availableTerminalHeight,
+}: ApprovalModeDialogProps): React.JSX.Element {
+  // Start with User scope by default
+  const [selectedScope, setSelectedScope] = useState<SettingScope>(
+    SettingScope.User,
+  );
+
+  // Track the currently highlighted approval mode
+  const [highlightedMode, setHighlightedMode] = useState<ApprovalMode>(
+    currentMode || ApprovalMode.DEFAULT,
+  );
+
+  // Generate approval mode items with inline descriptions
+  const modeItems = APPROVAL_MODES.map((mode) => ({
+    label: `${mode} - ${formatModeDescription(mode)}`,
+    value: mode,
+    key: mode,
+  }));
+
+  // Find the index of the current mode
+  const initialModeIndex = modeItems.findIndex(
+    (item) => item.value === highlightedMode,
+  );
+  const safeInitialModeIndex = initialModeIndex >= 0 ? initialModeIndex : 0;
+
+  const handleModeSelect = useCallback(
+    (mode: ApprovalMode) => {
+      onSelect(mode, selectedScope);
+    },
+    [onSelect, selectedScope],
+  );
+
+  const handleModeHighlight = (mode: ApprovalMode) => {
+    setHighlightedMode(mode);
+  };
+
+  const handleScopeHighlight = useCallback((scope: SettingScope) => {
+    setSelectedScope(scope);
+  }, []);
+
+  const handleScopeSelect = useCallback(
+    (scope: SettingScope) => {
+      onSelect(highlightedMode, scope);
+    },
+    [onSelect, highlightedMode],
+  );
+
+  const [focusSection, setFocusSection] = useState<'mode' | 'scope'>('mode');
+
+  useKeypress(
+    (key) => {
+      if (key.name === 'tab') {
+        setFocusSection((prev) => (prev === 'mode' ? 'scope' : 'mode'));
+      }
+      if (key.name === 'escape') {
+        onSelect(undefined, selectedScope);
+      }
+    },
+    { isActive: true },
+  );
+
+  // Generate scope message for approval mode setting
+  const otherScopeModifiedMessage = getScopeMessageForSetting(
+    'tools.approvalMode',
+    selectedScope,
+    settings,
+  );
+
+  // Check if user scope is selected but workspace has the setting
+  const showWorkspacePriorityWarning =
+    selectedScope === SettingScope.User &&
+    otherScopeModifiedMessage.toLowerCase().includes('workspace');
+
+  return (
+    <Box
+      borderStyle="round"
+      borderColor={theme.border.default}
+      flexDirection="row"
+      padding={1}
+      width="100%"
+      height="100%"
+    >
+      <Box flexDirection="column" flexGrow={1}>
+        {/* Approval Mode Selection */}
+        <Text bold={focusSection === 'mode'} wrap="truncate">
+          {focusSection === 'mode' ? '> ' : ' '}Approval Mode{' '}
+          <Text color={theme.text.secondary}>{otherScopeModifiedMessage}</Text>
+        </Text>
+        <Box height={1} />
+        <RadioButtonSelect
+          items={modeItems}
+          initialIndex={safeInitialModeIndex}
+          onSelect={handleModeSelect}
+          onHighlight={handleModeHighlight}
+          isFocused={focusSection === 'mode'}
+          maxItemsToShow={10}
+          showScrollArrows={false}
+          showNumbers={focusSection === 'mode'}
+        />
+
+        <Box height={1} />
+
+        {/* Scope Selection */}
+        <Box marginTop={1}>
+          <ScopeSelector
+            onSelect={handleScopeSelect}
+            onHighlight={handleScopeHighlight}
+            isFocused={focusSection === 'scope'}
+            initialScope={selectedScope}
+          />
+        </Box>

+        <Box height={1} />
+
+        {/* Warning when workspace setting will override user setting */}
+        {showWorkspacePriorityWarning && (
+          <>
+            <Text color={theme.status.warning} wrap="wrap">
+              ⚠ Workspace approval mode exists and takes priority. User-level
+              change will have no effect.
+            </Text>
+            <Box height={1} />
+          </>
+        )}
+
+        <Text color={theme.text.secondary}>
+          (Use Enter to select, Tab to change focus)
+        </Text>
+      </Box>
+    </Box>
+  );
+}
@@ -20,6 +20,7 @@ import { WorkspaceMigrationDialog } from './WorkspaceMigrationDialog.js';
 import { ProQuotaDialog } from './ProQuotaDialog.js';
 import { PermissionsModifyTrustDialog } from './PermissionsModifyTrustDialog.js';
 import { ModelDialog } from './ModelDialog.js';
+import { ApprovalModeDialog } from './ApprovalModeDialog.js';
 import { theme } from '../semantic-colors.js';
 import { useUIState } from '../contexts/UIStateContext.js';
 import { useUIActions } from '../contexts/UIActionsContext.js';
@@ -180,6 +181,22 @@ export const DialogManager = ({
           onSelect={() => uiActions.closeSettingsDialog()}
           onRestartRequest={() => process.exit(0)}
           availableTerminalHeight={terminalHeight - staticExtraHeight}
+          config={config}
+        />
+      </Box>
+    );
+  }
+  if (uiState.isApprovalModeDialogOpen) {
+    const currentMode = config.getApprovalMode();
+    return (
+      <Box flexDirection="column">
+        <ApprovalModeDialog
+          settings={settings}
+          currentMode={currentMode}
+          onSelect={uiActions.handleApprovalModeSelect}
+          availableTerminalHeight={
+            constrainHeight ? terminalHeight - staticExtraHeight : undefined
+          }
         />
       </Box>
     );
@@ -71,15 +71,24 @@ describe('<HistoryItemDisplay />', () => {

   it('renders AboutBox for "about" type', () => {
     const item: HistoryItem = {
-      ...baseItem,
+      id: 1,
       type: MessageType.ABOUT,
-      cliVersion: '1.0.0',
-      osVersion: 'test-os',
-      sandboxEnv: 'test-env',
-      modelVersion: 'test-model',
-      selectedAuthType: 'test-auth',
-      gcpProject: 'test-project',
-      ideClient: 'test-ide',
+      systemInfo: {
+        cliVersion: '1.0.0',
+        osPlatform: 'test-os',
+        osArch: 'x64',
+        osRelease: '22.0.0',
+        nodeVersion: 'v20.0.0',
+        npmVersion: '10.0.0',
+        sandboxEnv: 'test-env',
+        modelVersion: 'test-model',
+        selectedAuthType: 'test-auth',
+        ideClient: 'test-ide',
+        sessionId: 'test-session-id',
+        memoryUsage: '100 MB',
+        baseUrl: undefined,
+        gitCommit: undefined,
+      },
     };
     const { lastFrame } = renderWithProviders(
       <HistoryItemDisplay {...baseItem} item={item} />,
@@ -95,15 +95,7 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
         <ErrorMessage text={itemForDisplay.text} />
       )}
       {itemForDisplay.type === 'about' && (
-        <AboutBox
-          cliVersion={itemForDisplay.cliVersion}
-          osVersion={itemForDisplay.osVersion}
-          sandboxEnv={itemForDisplay.sandboxEnv}
-          modelVersion={itemForDisplay.modelVersion}
-          selectedAuthType={itemForDisplay.selectedAuthType}
-          gcpProject={itemForDisplay.gcpProject}
-          ideClient={itemForDisplay.ideClient}
-        />
+        <AboutBox {...itemForDisplay.systemInfo} />
       )}
       {itemForDisplay.type === 'help' && commands && (
         <Help commands={commands} />
@@ -130,7 +130,7 @@ export function OpenAIKeyPrompt({
       }

       // Handle regular character input
-      if (key.sequence && !key.ctrl && !key.meta && !key.name) {
+      if (key.sequence && !key.ctrl && !key.meta) {
         // Filter control characters
         const cleanInput = key.sequence
           .split('')
@@ -9,11 +9,8 @@ import { Box, Text } from 'ink';
 import { theme } from '../semantic-colors.js';
 import type { LoadedSettings, Settings } from '../../config/settings.js';
 import { SettingScope } from '../../config/settings.js';
-import {
-  getScopeItems,
-  getScopeMessageForSetting,
-} from '../../utils/dialogScopeUtils.js';
-import { RadioButtonSelect } from './shared/RadioButtonSelect.js';
+import { getScopeMessageForSetting } from '../../utils/dialogScopeUtils.js';
+import { ScopeSelector } from './shared/ScopeSelector.js';
 import {
   getDialogSettingKeys,
   setPendingSettingValue,
@@ -30,6 +27,7 @@ import {
   getEffectiveValue,
 } from '../../utils/settingsUtils.js';
 import { useVimMode } from '../contexts/VimModeContext.js';
+import { type Config } from '@qwen-code/qwen-code-core';
 import { useKeypress } from '../hooks/useKeypress.js';
 import chalk from 'chalk';
 import { cpSlice, cpLen, stripUnsafeCharacters } from '../utils/textUtils.js';
@@ -43,6 +41,7 @@ interface SettingsDialogProps {
   onSelect: (settingName: string | undefined, scope: SettingScope) => void;
   onRestartRequest?: () => void;
   availableTerminalHeight?: number;
+  config?: Config;
 }

 const maxItemsToShow = 8;
@@ -52,6 +51,7 @@ export function SettingsDialog({
   onSelect,
   onRestartRequest,
   availableTerminalHeight,
+  config,
 }: SettingsDialogProps): React.JSX.Element {
   // Get vim mode context to sync vim mode changes
   const { vimEnabled, toggleVimEnabled } = useVimMode();
@@ -184,6 +184,21 @@ export function SettingsDialog({
         });
       }

+      // Special handling for approval mode to apply to current session
+      if (
+        key === 'tools.approvalMode' &&
+        settings.merged.tools?.approvalMode
+      ) {
+        try {
+          config?.setApprovalMode(settings.merged.tools.approvalMode);
+        } catch (error) {
+          console.error(
+            'Failed to apply approval mode to current session:',
+            error,
+          );
+        }
+      }
+
       // Remove from modifiedSettings since it's now saved
       setModifiedSettings((prev) => {
         const updated = new Set(prev);
@@ -357,12 +372,6 @@ export function SettingsDialog({
     setEditCursorPos(0);
   };

-  // Scope selector items
-  const scopeItems = getScopeItems().map((item) => ({
-    ...item,
-    key: item.value,
-  }));
-
   const handleScopeHighlight = (scope: SettingScope) => {
     setSelectedScope(scope);
   };
@@ -616,7 +625,11 @@ export function SettingsDialog({
               prev,
             ),
           );
-        } else if (defType === 'number' || defType === 'string') {
+        } else if (
+          defType === 'number' ||
+          defType === 'string' ||
+          defType === 'enum'
+        ) {
           if (
             typeof defaultValue === 'number' ||
|
||||||
typeof defaultValue === 'string'
|
typeof defaultValue === 'string'
|
||||||
@@ -673,6 +686,21 @@ export function SettingsDialog({
|
|||||||
selectedScope,
|
selectedScope,
|
||||||
);
|
);
|
||||||
|
|
||||||
|
// Special handling for approval mode to apply to current session
|
||||||
|
if (
|
||||||
|
currentSetting.value === 'tools.approvalMode' &&
|
||||||
|
settings.merged.tools?.approvalMode
|
||||||
|
) {
|
||||||
|
try {
|
||||||
|
config?.setApprovalMode(settings.merged.tools.approvalMode);
|
||||||
|
} catch (error) {
|
||||||
|
console.error(
|
||||||
|
'Failed to apply approval mode to current session:',
|
||||||
|
error,
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Remove from global pending changes if present
|
// Remove from global pending changes if present
|
||||||
setGlobalPendingChanges((prev) => {
|
setGlobalPendingChanges((prev) => {
|
||||||
if (!prev.has(currentSetting.value)) return prev;
|
if (!prev.has(currentSetting.value)) return prev;
|
||||||
@@ -876,19 +904,12 @@ export function SettingsDialog({
|
|||||||
|
|
||||||
{/* Scope Selection - conditionally visible based on height constraints */}
|
{/* Scope Selection - conditionally visible based on height constraints */}
|
||||||
{showScopeSelection && (
|
{showScopeSelection && (
|
||||||
<Box marginTop={1} flexDirection="column">
|
<Box marginTop={1}>
|
||||||
<Text bold={focusSection === 'scope'} wrap="truncate">
|
<ScopeSelector
|
||||||
{focusSection === 'scope' ? '> ' : ' '}Apply To
|
|
||||||
</Text>
|
|
||||||
<RadioButtonSelect
|
|
||||||
items={scopeItems}
|
|
||||||
initialIndex={scopeItems.findIndex(
|
|
||||||
(item) => item.value === selectedScope,
|
|
||||||
)}
|
|
||||||
onSelect={handleScopeSelect}
|
onSelect={handleScopeSelect}
|
||||||
onHighlight={handleScopeHighlight}
|
onHighlight={handleScopeHighlight}
|
||||||
isFocused={focusSection === 'scope'}
|
isFocused={focusSection === 'scope'}
|
||||||
showNumbers={focusSection === 'scope'}
|
initialScope={selectedScope}
|
||||||
/>
|
/>
|
||||||
</Box>
|
</Box>
|
||||||
)}
|
)}
|
||||||
|

@@ -28,7 +28,6 @@ exports[`SettingsDialog > Snapshot Tests > should render default state correctly
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -63,7 +62,6 @@ exports[`SettingsDialog > Snapshot Tests > should render focused on scope select
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -98,7 +96,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with accessibility sett
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -133,7 +130,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with all boolean settin
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -168,7 +164,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with different scope se
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -203,7 +198,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with different scope se
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -238,7 +232,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with file filtering set
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -273,7 +266,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with mixed boolean and
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -308,7 +300,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with tools and security
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │
@@ -343,7 +334,6 @@ exports[`SettingsDialog > Snapshot Tests > should render with various boolean se
 │ Apply To │
 │ ● User Settings │
 │ Workspace Settings │
-│ System Settings │
 │ │
 │ (Use Enter to select, Tab to change focus) │
 │ │

@@ -6,7 +6,6 @@ exports[`ThemeDialog Snapshots > should render correctly in scope selector mode
 │ > Apply To │
 │ ● 1. User Settings │
 │ 2. Workspace Settings │
-│ 3. System Settings │
 │ │
 │ (Use Enter to apply scope, Tab to select theme) │
 │ │

@@ -12,6 +12,7 @@ import type {
   Config,
 } from '@qwen-code/qwen-code-core';
 import { renderWithProviders } from '../../../test-utils/render.js';
+import type { LoadedSettings } from '../../../config/settings.js';

 describe('ToolConfirmationMessage', () => {
   const mockConfig = {
@@ -187,4 +188,63 @@ describe('ToolConfirmationMessage', () => {
       });
     });
   });
+
+  describe('external editor option', () => {
+    const editConfirmationDetails: ToolCallConfirmationDetails = {
+      type: 'edit',
+      title: 'Confirm Edit',
+      fileName: 'test.txt',
+      filePath: '/test.txt',
+      fileDiff: '...diff...',
+      originalContent: 'a',
+      newContent: 'b',
+      onConfirm: vi.fn(),
+    };
+
+    it('should show "Modify with external editor" when preferredEditor is set', () => {
+      const mockConfig = {
+        isTrustedFolder: () => true,
+        getIdeMode: () => false,
+      } as unknown as Config;
+
+      const { lastFrame } = renderWithProviders(
+        <ToolConfirmationMessage
+          confirmationDetails={editConfirmationDetails}
+          config={mockConfig}
+          availableTerminalHeight={30}
+          terminalWidth={80}
+        />,
+        {
+          settings: {
+            merged: { general: { preferredEditor: 'vscode' } },
+          } as unknown as LoadedSettings,
+        },
+      );
+
+      expect(lastFrame()).toContain('Modify with external editor');
+    });
+
+    it('should NOT show "Modify with external editor" when preferredEditor is not set', () => {
+      const mockConfig = {
+        isTrustedFolder: () => true,
+        getIdeMode: () => false,
+      } as unknown as Config;
+
+      const { lastFrame } = renderWithProviders(
+        <ToolConfirmationMessage
+          confirmationDetails={editConfirmationDetails}
+          config={mockConfig}
+          availableTerminalHeight={30}
+          terminalWidth={80}
+        />,
+        {
+          settings: {
+            merged: { general: {} },
+          } as unknown as LoadedSettings,
+        },
+      );
+
+      expect(lastFrame()).not.toContain('Modify with external editor');
+    });
+  });
 });

@@ -15,12 +15,14 @@ import type {
   ToolExecuteConfirmationDetails,
   ToolMcpConfirmationDetails,
   Config,
+  EditorType,
 } from '@qwen-code/qwen-code-core';
 import { IdeClient, ToolConfirmationOutcome } from '@qwen-code/qwen-code-core';
 import type { RadioSelectItem } from '../shared/RadioButtonSelect.js';
 import { RadioButtonSelect } from '../shared/RadioButtonSelect.js';
 import { MaxSizedBox } from '../shared/MaxSizedBox.js';
 import { useKeypress } from '../../hooks/useKeypress.js';
+import { useSettings } from '../../contexts/SettingsContext.js';
 import { theme } from '../../semantic-colors.js';

 export interface ToolConfirmationMessageProps {
@@ -45,6 +47,11 @@ export const ToolConfirmationMessage: React.FC<
   const { onConfirm } = confirmationDetails;
   const childWidth = terminalWidth - 2; // 2 for padding

+  const settings = useSettings();
+  const preferredEditor = settings.merged.general?.preferredEditor as
+    | EditorType
+    | undefined;
+
   const [ideClient, setIdeClient] = useState<IdeClient | null>(null);
   const [isDiffingEnabled, setIsDiffingEnabled] = useState(false);

@@ -199,7 +206,7 @@ export const ToolConfirmationMessage: React.FC<
         key: 'Yes, allow always',
       });
     }
-    if (!config.getIdeMode() || !isDiffingEnabled) {
+    if ((!config.getIdeMode() || !isDiffingEnabled) && preferredEditor) {
       options.push({
         label: 'Modify with external editor',
         value: ToolConfirmationOutcome.ModifyWithEditor,

@@ -23,7 +23,7 @@ export const ToolsList: React.FC<ToolsListProps> = ({
 }) => (
   <Box flexDirection="column" marginBottom={1}>
     <Text bold color={theme.text.primary}>
-      Available Gemini CLI tools:
+      Available Qwen Code CLI tools:
     </Text>
     <Box height={1} />
     {tools.length > 0 ? (
@@ -1,7 +1,7 @@
|
|||||||
// Vitest Snapshot v1, https://vitest.dev/guide/snapshot.html
|
// Vitest Snapshot v1, https://vitest.dev/guide/snapshot.html
|
||||||
|
|
||||||
exports[`<ToolsList /> > renders correctly with descriptions 1`] = `
|
exports[`<ToolsList /> > renders correctly with descriptions 1`] = `
|
||||||
"Available Gemini CLI tools:
|
"Available Qwen Code CLI tools:
|
||||||
|
|
||||||
- Test Tool One (test-tool-one)
|
- Test Tool One (test-tool-one)
|
||||||
This is the first test tool.
|
This is the first test tool.
|
||||||
@@ -16,14 +16,14 @@ exports[`<ToolsList /> > renders correctly with descriptions 1`] = `
|
|||||||
`;
|
`;
|
||||||
|
|
||||||
exports[`<ToolsList /> > renders correctly with no tools 1`] = `
|
exports[`<ToolsList /> > renders correctly with no tools 1`] = `
|
||||||
"Available Gemini CLI tools:
|
"Available Qwen Code CLI tools:
|
||||||
|
|
||||||
No tools available
|
No tools available
|
||||||
"
|
"
|
||||||
`;
|
`;
|
||||||
|
|
||||||
exports[`<ToolsList /> > renders correctly without descriptions 1`] = `
|
exports[`<ToolsList /> > renders correctly without descriptions 1`] = `
|
||||||
"Available Gemini CLI tools:
|
"Available Qwen Code CLI tools:
|
||||||
|
|
||||||
- Test Tool One
|
- Test Tool One
|
||||||
- Test Tool Two
|
- Test Tool Two
|
||||||
|

@@ -8,7 +8,11 @@ import { createContext, useContext } from 'react';
 import { type Key } from '../hooks/useKeypress.js';
 import { type IdeIntegrationNudgeResult } from '../IdeIntegrationNudge.js';
 import { type FolderTrustChoice } from '../components/FolderTrustDialog.js';
-import { type AuthType, type EditorType } from '@qwen-code/qwen-code-core';
+import {
+  type AuthType,
+  type EditorType,
+  type ApprovalMode,
+} from '@qwen-code/qwen-code-core';
 import { type SettingScope } from '../../config/settings.js';
 import type { AuthState } from '../types.js';
 import { type VisionSwitchOutcome } from '../components/ModelSwitchDialog.js';
@@ -19,6 +23,10 @@ export interface UIActions {
     scope: SettingScope,
   ) => void;
   handleThemeHighlight: (themeName: string | undefined) => void;
+  handleApprovalModeSelect: (
+    mode: ApprovalMode | undefined,
+    scope: SettingScope,
+  ) => void;
   handleAuthSelect: (
     authType: AuthType | undefined,
     scope: SettingScope,

@@ -69,6 +69,7 @@ export interface UIState {
   isSettingsDialogOpen: boolean;
   isModelDialogOpen: boolean;
   isPermissionsDialogOpen: boolean;
+  isApprovalModeDialogOpen: boolean;
   slashCommands: readonly SlashCommand[];
   pendingSlashCommandHistoryItems: HistoryItemWithoutId[];
   commandContext: CommandContext;

@@ -80,6 +80,8 @@ describe('handleAtCommand', () => {
       getReadManyFilesExcludes: () => [],
     }),
     getUsageStatisticsEnabled: () => false,
+    getTruncateToolOutputThreshold: () => 2500,
+    getTruncateToolOutputLines: () => 500,
   } as unknown as Config;

   const registry = new ToolRegistry(mockConfig);

@@ -48,6 +48,7 @@ interface SlashCommandProcessorActions {
   openSettingsDialog: () => void;
   openModelDialog: () => void;
   openPermissionsDialog: () => void;
+  openApprovalModeDialog: () => void;
   quit: (messages: HistoryItem[]) => void;
   setDebugMessage: (message: string) => void;
   toggleCorgiMode: () => void;
@@ -138,13 +139,7 @@ export const useSlashCommandProcessor = (
       if (message.type === MessageType.ABOUT) {
         historyItemContent = {
           type: 'about',
-          cliVersion: message.cliVersion,
-          osVersion: message.osVersion,
-          sandboxEnv: message.sandboxEnv,
-          modelVersion: message.modelVersion,
-          selectedAuthType: message.selectedAuthType,
-          gcpProject: message.gcpProject,
-          ideClient: message.ideClient,
+          systemInfo: message.systemInfo,
         };
       } else if (message.type === MessageType.HELP) {
         historyItemContent = {
@@ -402,6 +397,9 @@ export const useSlashCommandProcessor = (
         case 'subagent_list':
           actions.openAgentsManagerDialog();
           return { type: 'handled' };
+        case 'approval-mode':
+          actions.openApprovalModeDialog();
+          return { type: 'handled' };
         case 'help':
           return { type: 'handled' };
         default: {

packages/cli/src/ui/hooks/useApprovalModeCommand.ts (new file, 57 lines)
@@ -0,0 +1,57 @@
+/**
+ * @license
+ * Copyright 2025 Qwen
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import { useState, useCallback } from 'react';
+import type { ApprovalMode, Config } from '@qwen-code/qwen-code-core';
+import type { LoadedSettings, SettingScope } from '../../config/settings.js';
+
+interface UseApprovalModeCommandReturn {
+  isApprovalModeDialogOpen: boolean;
+  openApprovalModeDialog: () => void;
+  handleApprovalModeSelect: (
+    mode: ApprovalMode | undefined,
+    scope: SettingScope,
+  ) => void;
+}
+
+export const useApprovalModeCommand = (
+  loadedSettings: LoadedSettings,
+  config: Config,
+): UseApprovalModeCommandReturn => {
+  const [isApprovalModeDialogOpen, setIsApprovalModeDialogOpen] =
+    useState(false);
+
+  const openApprovalModeDialog = useCallback(() => {
+    setIsApprovalModeDialogOpen(true);
+  }, []);
+
+  const handleApprovalModeSelect = useCallback(
+    (mode: ApprovalMode | undefined, scope: SettingScope) => {
+      try {
+        if (!mode) {
+          // User cancelled the dialog
+          setIsApprovalModeDialogOpen(false);
+          return;
+        }
+
+        // Set the mode in the current session and persist to settings
+        loadedSettings.setValue(scope, 'tools.approvalMode', mode);
+        config.setApprovalMode(
+          loadedSettings.merged.tools?.approvalMode ?? mode,
+        );
+      } finally {
+        setIsApprovalModeDialogOpen(false);
+      }
+    },
+    [config, loadedSettings],
+  );
+
+  return {
+    isApprovalModeDialogOpen,
+    openApprovalModeDialog,
+    handleApprovalModeSelect,
+  };
+};

@@ -6,13 +6,20 @@

 import { useCallback } from 'react';
 import { SettingScope } from '../../config/settings.js';
-import type { AuthType } from '@qwen-code/qwen-code-core';
+import type { AuthType, ApprovalMode } from '@qwen-code/qwen-code-core';

 export interface DialogCloseOptions {
   // Theme dialog
   isThemeDialogOpen: boolean;
   handleThemeSelect: (theme: string | undefined, scope: SettingScope) => void;

+  // Approval mode dialog
+  isApprovalModeDialogOpen: boolean;
+  handleApprovalModeSelect: (
+    mode: ApprovalMode | undefined,
+    scope: SettingScope,
+  ) => void;
+
   // Auth dialog
   isAuthDialogOpen: boolean;
   handleAuthSelect: (
@@ -57,6 +64,12 @@ export function useDialogClose(options: DialogCloseOptions) {
       return true;
     }

+    if (options.isApprovalModeDialogOpen) {
+      // Mimic ESC behavior: onSelect(undefined, selectedScope) - keeps current mode
+      options.handleApprovalModeSelect(undefined, SettingScope.User);
+      return true;
+    }
+
    if (options.isEditorDialogOpen) {
       // Mimic ESC behavior: call onExit() directly
       options.exitEditorDialog();

@@ -109,7 +109,7 @@ describe('useEditorSettings', () => {

       expect(mockLoadedSettings.setValue).toHaveBeenCalledWith(
         scope,
-        'preferredEditor',
+        'general.preferredEditor',
         editorType,
       );

@@ -139,7 +139,7 @@ describe('useEditorSettings', () => {

       expect(mockLoadedSettings.setValue).toHaveBeenCalledWith(
         scope,
-        'preferredEditor',
+        'general.preferredEditor',
         undefined,
       );

@@ -170,7 +170,7 @@ describe('useEditorSettings', () => {

       expect(mockLoadedSettings.setValue).toHaveBeenCalledWith(
         scope,
-        'preferredEditor',
+        'general.preferredEditor',
         editorType,
       );

@@ -199,7 +199,7 @@ describe('useEditorSettings', () => {

       expect(mockLoadedSettings.setValue).toHaveBeenCalledWith(
         scope,
-        'preferredEditor',
+        'general.preferredEditor',
         editorType,
       );


@@ -45,7 +45,7 @@ export const useEditorSettings = (
     }

     try {
-      loadedSettings.setValue(scope, 'preferredEditor', editorType);
+      loadedSettings.setValue(scope, 'general.preferredEditor', editorType);
       addItem(
         {
           type: MessageType.INFO,

@@ -120,13 +120,22 @@ export type HistoryItemWarning = HistoryItemBase & {

 export type HistoryItemAbout = HistoryItemBase & {
   type: 'about';
-  cliVersion: string;
-  osVersion: string;
-  sandboxEnv: string;
-  modelVersion: string;
-  selectedAuthType: string;
-  gcpProject: string;
-  ideClient: string;
+  systemInfo: {
+    cliVersion: string;
+    osPlatform: string;
+    osArch: string;
+    osRelease: string;
+    nodeVersion: string;
+    npmVersion: string;
+    sandboxEnv: string;
+    modelVersion: string;
+    selectedAuthType: string;
+    ideClient: string;
+    sessionId: string;
+    memoryUsage: string;
+    baseUrl?: string;
+    gitCommit?: string;
+  };
 };

 export type HistoryItemHelp = HistoryItemBase & {
@@ -288,13 +297,22 @@ export type Message =
   | {
       type: MessageType.ABOUT;
       timestamp: Date;
-      cliVersion: string;
-      osVersion: string;
-      sandboxEnv: string;
-      modelVersion: string;
-      selectedAuthType: string;
-      gcpProject: string;
-      ideClient: string;
+      systemInfo: {
+        cliVersion: string;
+        osPlatform: string;
+        osArch: string;
+        osRelease: string;
+        nodeVersion: string;
+        npmVersion: string;
+        sandboxEnv: string;
+        modelVersion: string;
+        selectedAuthType: string;
+        ideClient: string;
+        sessionId: string;
+        memoryUsage: string;
+        baseUrl?: string;
+        gitCommit?: string;
+      };
       content?: string; // Optional content, not really used for ABOUT
     }
   | {
@@ -14,7 +14,11 @@ import { settingExistsInScope } from './settingsUtils.js';
|
|||||||
export const SCOPE_LABELS = {
|
export const SCOPE_LABELS = {
|
||||||
[SettingScope.User]: 'User Settings',
|
[SettingScope.User]: 'User Settings',
|
||||||
[SettingScope.Workspace]: 'Workspace Settings',
|
[SettingScope.Workspace]: 'Workspace Settings',
|
||||||
[SettingScope.System]: 'System Settings',
|
|
||||||
|
// TODO: migrate system settings to user settings
|
||||||
|
// we don't want to save settings to system scope, it is a troublemaker
|
||||||
|
// comment it out for now.
|
||||||
|
// [SettingScope.System]: 'System Settings',
|
||||||
} as const;
|
} as const;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -27,7 +31,7 @@ export function getScopeItems() {
|
|||||||
label: SCOPE_LABELS[SettingScope.Workspace],
|
label: SCOPE_LABELS[SettingScope.Workspace],
|
||||||
value: SettingScope.Workspace,
|
value: SettingScope.Workspace,
|
||||||
},
|
},
|
||||||
{ label: SCOPE_LABELS[SettingScope.System], value: SettingScope.System },
|
// { label: SCOPE_LABELS[SettingScope.System], value: SettingScope.System },
|
||||||
];
|
];
|
||||||
}
|
}
|
||||||
|
|
||||||
|

packages/cli/src/utils/systemInfo.test.ts (new file, 331 lines)
@@ -0,0 +1,331 @@
+/**
+ * @license
+ * Copyright 2025 Qwen
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
+import {
+  getSystemInfo,
+  getExtendedSystemInfo,
+  getNpmVersion,
+  getSandboxEnv,
+  getIdeClientName,
+} from './systemInfo.js';
+import type { CommandContext } from '../ui/commands/types.js';
+import { createMockCommandContext } from '../test-utils/mockCommandContext.js';
+import * as child_process from 'node:child_process';
+import os from 'node:os';
+import { IdeClient } from '@qwen-code/qwen-code-core';
+import * as versionUtils from './version.js';
+import type { ExecSyncOptions } from 'node:child_process';
+
+vi.mock('node:child_process');
+
+vi.mock('node:os', () => ({
+  default: {
+    release: vi.fn(),
+  },
+}));
+
+vi.mock('./version.js', () => ({
+  getCliVersion: vi.fn(),
+}));
+
+vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
+  const actual =
+    await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
+  return {
+    ...actual,
+    IdeClient: {
+      getInstance: vi.fn(),
+    },
+  };
+});
+
+describe('systemInfo', () => {
+  let mockContext: CommandContext;
+  const originalPlatform = process.platform;
+  const originalArch = process.arch;
+  const originalVersion = process.version;
+  const originalEnv = { ...process.env };
+
+  beforeEach(() => {
+    mockContext = createMockCommandContext({
+      services: {
+        config: {
+          getModel: vi.fn().mockReturnValue('test-model'),
+          getIdeMode: vi.fn().mockReturnValue(true),
+          getSessionId: vi.fn().mockReturnValue('test-session-id'),
+          getContentGeneratorConfig: vi.fn().mockReturnValue({
+            baseUrl: 'https://api.openai.com',
+          }),
+        },
+        settings: {
+          merged: {
+            security: {
+              auth: {
+                selectedType: 'test-auth',
+              },
+            },
+          },
+        },
+      },
+    } as unknown as CommandContext);
+
+    vi.mocked(versionUtils.getCliVersion).mockResolvedValue('test-version');
+    vi.mocked(child_process.execSync).mockImplementation(
+      (command: string, options?: ExecSyncOptions) => {
+        if (
+          options &&
+          typeof options === 'object' &&
+          'encoding' in options &&
+          options.encoding === 'utf-8'
+        ) {
+          return '10.0.0';
+        }
+        return Buffer.from('10.0.0', 'utf-8');
+      },
+    );
+    vi.mocked(os.release).mockReturnValue('22.0.0');
+    process.env['GOOGLE_CLOUD_PROJECT'] = 'test-gcp-project';
+    Object.defineProperty(process, 'platform', {
+      value: 'test-os',
+    });
+    Object.defineProperty(process, 'arch', {
+      value: 'x64',
+    });
+    Object.defineProperty(process, 'version', {
+      value: 'v20.0.0',
+    });
+  });
+
+  afterEach(() => {
+    vi.unstubAllEnvs();
+    Object.defineProperty(process, 'platform', {
+      value: originalPlatform,
+    });
+    Object.defineProperty(process, 'arch', {
+      value: originalArch,
+    });
+    Object.defineProperty(process, 'version', {
+      value: originalVersion,
+    });
+    process.env = originalEnv;
+    vi.clearAllMocks();
+    vi.resetAllMocks();
+  });
+
+  describe('getNpmVersion', () => {
||||||
|
it('should return npm version when available', async () => {
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(
|
||||||
|
(command: string, options?: ExecSyncOptions) => {
|
||||||
|
if (
|
||||||
|
options &&
|
||||||
|
typeof options === 'object' &&
|
||||||
|
'encoding' in options &&
|
||||||
|
options.encoding === 'utf-8'
|
||||||
|
) {
|
||||||
|
return '10.0.0';
|
||||||
|
}
|
||||||
|
return Buffer.from('10.0.0', 'utf-8');
|
||||||
|
},
|
||||||
|
);
|
||||||
|
const version = await getNpmVersion();
|
||||||
|
expect(version).toBe('10.0.0');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return unknown when npm command fails', async () => {
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(() => {
|
||||||
|
throw new Error('npm not found');
|
||||||
|
});
|
||||||
|
const version = await getNpmVersion();
|
||||||
|
expect(version).toBe('unknown');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('getSandboxEnv', () => {
|
||||||
|
it('should return "no sandbox" when SANDBOX is not set', () => {
|
||||||
|
delete process.env['SANDBOX'];
|
||||||
|
expect(getSandboxEnv()).toBe('no sandbox');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return sandbox-exec info when SANDBOX is sandbox-exec', () => {
|
||||||
|
process.env['SANDBOX'] = 'sandbox-exec';
|
||||||
|
process.env['SEATBELT_PROFILE'] = 'test-profile';
|
||||||
|
expect(getSandboxEnv()).toBe('sandbox-exec (test-profile)');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return sandbox name without prefix when stripPrefix is true', () => {
|
||||||
|
process.env['SANDBOX'] = 'qwen-code-test-sandbox';
|
||||||
|
expect(getSandboxEnv(true)).toBe('test-sandbox');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return sandbox name with prefix when stripPrefix is false', () => {
|
||||||
|
process.env['SANDBOX'] = 'qwen-code-test-sandbox';
|
||||||
|
expect(getSandboxEnv(false)).toBe('qwen-code-test-sandbox');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle qwen- prefix removal', () => {
|
||||||
|
process.env['SANDBOX'] = 'qwen-custom-sandbox';
|
||||||
|
expect(getSandboxEnv(true)).toBe('custom-sandbox');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('getIdeClientName', () => {
|
||||||
|
it('should return IDE client name when IDE mode is enabled', async () => {
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue('test-ide'),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
|
||||||
|
const ideClient = await getIdeClientName(mockContext);
|
||||||
|
expect(ideClient).toBe('test-ide');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string when IDE mode is disabled', async () => {
|
||||||
|
vi.mocked(mockContext.services.config!.getIdeMode).mockReturnValue(false);
|
||||||
|
|
||||||
|
const ideClient = await getIdeClientName(mockContext);
|
||||||
|
expect(ideClient).toBe('');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should return empty string when IDE client detection fails', async () => {
|
||||||
|
vi.mocked(IdeClient.getInstance).mockRejectedValue(
|
||||||
|
new Error('IDE client error'),
|
||||||
|
);
|
||||||
|
|
||||||
|
const ideClient = await getIdeClientName(mockContext);
|
||||||
|
expect(ideClient).toBe('');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('getSystemInfo', () => {
|
||||||
|
it('should collect all system information', async () => {
|
||||||
|
// Ensure SANDBOX is not set for this test
|
||||||
|
delete process.env['SANDBOX'];
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue('test-ide'),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(
|
||||||
|
(command: string, options?: ExecSyncOptions) => {
|
||||||
|
if (
|
||||||
|
options &&
|
||||||
|
typeof options === 'object' &&
|
||||||
|
'encoding' in options &&
|
||||||
|
options.encoding === 'utf-8'
|
||||||
|
) {
|
||||||
|
return '10.0.0';
|
||||||
|
}
|
||||||
|
return Buffer.from('10.0.0', 'utf-8');
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
const systemInfo = await getSystemInfo(mockContext);
|
||||||
|
|
||||||
|
expect(systemInfo).toEqual({
|
||||||
|
cliVersion: 'test-version',
|
||||||
|
osPlatform: 'test-os',
|
||||||
|
osArch: 'x64',
|
||||||
|
osRelease: '22.0.0',
|
||||||
|
nodeVersion: 'v20.0.0',
|
||||||
|
npmVersion: '10.0.0',
|
||||||
|
sandboxEnv: 'no sandbox',
|
||||||
|
modelVersion: 'test-model',
|
||||||
|
selectedAuthType: 'test-auth',
|
||||||
|
ideClient: 'test-ide',
|
||||||
|
sessionId: 'test-session-id',
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should handle missing config gracefully', async () => {
|
||||||
|
mockContext.services.config = null;
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue(''),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
|
||||||
|
const systemInfo = await getSystemInfo(mockContext);
|
||||||
|
|
||||||
|
expect(systemInfo.modelVersion).toBe('Unknown');
|
||||||
|
expect(systemInfo.sessionId).toBe('unknown');
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('getExtendedSystemInfo', () => {
|
||||||
|
it('should include memory usage and base URL', async () => {
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue('test-ide'),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(
|
||||||
|
(command: string, options?: ExecSyncOptions) => {
|
||||||
|
if (
|
||||||
|
options &&
|
||||||
|
typeof options === 'object' &&
|
||||||
|
'encoding' in options &&
|
||||||
|
options.encoding === 'utf-8'
|
||||||
|
) {
|
||||||
|
return '10.0.0';
|
||||||
|
}
|
||||||
|
return Buffer.from('10.0.0', 'utf-8');
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
const { AuthType } = await import('@qwen-code/qwen-code-core');
|
||||||
|
// Update the mock context to use OpenAI auth
|
||||||
|
mockContext.services.settings.merged.security!.auth!.selectedType =
|
||||||
|
AuthType.USE_OPENAI;
|
||||||
|
|
||||||
|
const extendedInfo = await getExtendedSystemInfo(mockContext);
|
||||||
|
|
||||||
|
expect(extendedInfo.memoryUsage).toBeDefined();
|
||||||
|
expect(extendedInfo.memoryUsage).toMatch(/\d+\.\d+ (KB|MB|GB)/);
|
||||||
|
expect(extendedInfo.baseUrl).toBe('https://api.openai.com');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should use sandbox env without prefix for bug reports', async () => {
|
||||||
|
process.env['SANDBOX'] = 'qwen-code-test-sandbox';
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue(''),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(
|
||||||
|
(command: string, options?: ExecSyncOptions) => {
|
||||||
|
if (
|
||||||
|
options &&
|
||||||
|
typeof options === 'object' &&
|
||||||
|
'encoding' in options &&
|
||||||
|
options.encoding === 'utf-8'
|
||||||
|
) {
|
||||||
|
return '10.0.0';
|
||||||
|
}
|
||||||
|
return Buffer.from('10.0.0', 'utf-8');
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
const extendedInfo = await getExtendedSystemInfo(mockContext);
|
||||||
|
|
||||||
|
expect(extendedInfo.sandboxEnv).toBe('test-sandbox');
|
||||||
|
});
|
||||||
|
|
||||||
|
it('should not include base URL for non-OpenAI auth', async () => {
|
||||||
|
vi.mocked(IdeClient.getInstance).mockResolvedValue({
|
||||||
|
getDetectedIdeDisplayName: vi.fn().mockReturnValue(''),
|
||||||
|
} as unknown as IdeClient);
|
||||||
|
vi.mocked(child_process.execSync).mockImplementation(
|
||||||
|
(command: string, options?: ExecSyncOptions) => {
|
||||||
|
if (
|
||||||
|
options &&
|
||||||
|
typeof options === 'object' &&
|
||||||
|
'encoding' in options &&
|
||||||
|
options.encoding === 'utf-8'
|
||||||
|
) {
|
||||||
|
return '10.0.0';
|
||||||
|
}
|
||||||
|
return Buffer.from('10.0.0', 'utf-8');
|
||||||
|
},
|
||||||
|
);
|
||||||
|
|
||||||
|
const extendedInfo = await getExtendedSystemInfo(mockContext);
|
||||||
|
|
||||||
|
expect(extendedInfo.baseUrl).toBeUndefined();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
173
packages/cli/src/utils/systemInfo.ts
Normal file
@@ -0,0 +1,173 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import process from 'node:process';
import os from 'node:os';
import { execSync } from 'node:child_process';
import type { CommandContext } from '../ui/commands/types.js';
import { getCliVersion } from './version.js';
import { IdeClient, AuthType } from '@qwen-code/qwen-code-core';
import { formatMemoryUsage } from '../ui/utils/formatters.js';
import { GIT_COMMIT_INFO } from '../generated/git-commit.js';

/**
 * System information interface containing all system-related details
 * that can be collected for debugging and reporting purposes.
 */
export interface SystemInfo {
  cliVersion: string;
  osPlatform: string;
  osArch: string;
  osRelease: string;
  nodeVersion: string;
  npmVersion: string;
  sandboxEnv: string;
  modelVersion: string;
  selectedAuthType: string;
  ideClient: string;
  sessionId: string;
}

/**
 * Additional system information for bug reports
 */
export interface ExtendedSystemInfo extends SystemInfo {
  memoryUsage: string;
  baseUrl?: string;
  gitCommit?: string;
}

/**
 * Gets the NPM version, handling cases where npm might not be available.
 * Returns 'unknown' if npm command fails or is not found.
 */
export async function getNpmVersion(): Promise<string> {
  try {
    return execSync('npm --version', { encoding: 'utf-8' }).trim();
  } catch {
    return 'unknown';
  }
}

/**
 * Gets the IDE client name if IDE mode is enabled.
 * Returns empty string if IDE mode is disabled or IDE client is not detected.
 */
export async function getIdeClientName(
  context: CommandContext,
): Promise<string> {
  if (!context.services.config?.getIdeMode()) {
    return '';
  }
  try {
    const ideClient = await IdeClient.getInstance();
    return ideClient?.getDetectedIdeDisplayName() ?? '';
  } catch {
    return '';
  }
}

/**
 * Gets the sandbox environment information.
 * Handles different sandbox types including sandbox-exec and custom sandbox environments.
 * For bug reports, removes 'qwen-' or 'qwen-code-' prefixes from sandbox names.
 *
 * @param stripPrefix - Whether to strip 'qwen-' prefix (used for bug reports)
 */
export function getSandboxEnv(stripPrefix = false): string {
  const sandbox = process.env['SANDBOX'];

  if (!sandbox || sandbox === 'sandbox-exec') {
    if (sandbox === 'sandbox-exec') {
      const profile = process.env['SEATBELT_PROFILE'] || 'unknown';
      return `sandbox-exec (${profile})`;
    }
    return 'no sandbox';
  }

  // For bug reports, remove qwen- prefix
  if (stripPrefix) {
    return sandbox.replace(/^qwen-(?:code-)?/, '');
  }

  return sandbox;
}

/**
 * Collects comprehensive system information for debugging and reporting.
 * This function gathers all system-related details including OS, versions,
 * sandbox environment, authentication, and session information.
 *
 * @param context - Command context containing config and settings
 * @returns Promise resolving to SystemInfo object with all collected information
 */
export async function getSystemInfo(
  context: CommandContext,
): Promise<SystemInfo> {
  const osPlatform = process.platform;
  const osArch = process.arch;
  const osRelease = os.release();
  const nodeVersion = process.version;
  const npmVersion = await getNpmVersion();
  const sandboxEnv = getSandboxEnv();
  const modelVersion = context.services.config?.getModel() || 'Unknown';
  const cliVersion = await getCliVersion();
  const selectedAuthType =
    context.services.settings.merged.security?.auth?.selectedType || '';
  const ideClient = await getIdeClientName(context);
  const sessionId = context.services.config?.getSessionId() || 'unknown';

  return {
    cliVersion,
    osPlatform,
    osArch,
    osRelease,
    nodeVersion,
    npmVersion,
    sandboxEnv,
    modelVersion,
    selectedAuthType,
    ideClient,
    sessionId,
  };
}

/**
 * Collects extended system information for bug reports.
 * Includes all standard system info plus memory usage and optional base URL.
 *
 * @param context - Command context containing config and settings
 * @returns Promise resolving to ExtendedSystemInfo object
 */
export async function getExtendedSystemInfo(
  context: CommandContext,
): Promise<ExtendedSystemInfo> {
  const baseInfo = await getSystemInfo(context);
  const memoryUsage = formatMemoryUsage(process.memoryUsage().rss);

  // For bug reports, use sandbox name without prefix
  const sandboxEnv = getSandboxEnv(true);

  // Get base URL if using OpenAI auth
  const baseUrl =
    baseInfo.selectedAuthType === AuthType.USE_OPENAI
      ? context.services.config?.getContentGeneratorConfig()?.baseUrl
      : undefined;

  // Get git commit info
  const gitCommit =
    GIT_COMMIT_INFO && !['N/A'].includes(GIT_COMMIT_INFO)
      ? GIT_COMMIT_INFO
      : undefined;

  return {
    ...baseInfo,
    sandboxEnv,
    memoryUsage,
    baseUrl,
    gitCommit,
  };
}
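The prefix-stripping rule described in the `getSandboxEnv` JSDoc above can be illustrated in isolation. This sketch re-implements just the regex against a supplied string rather than reading `process.env`; the helper name `stripSandboxPrefix` is illustrative and not part of the module.

```typescript
// Illustrative re-implementation of the rule used by getSandboxEnv(true):
// drop a leading 'qwen-' or 'qwen-code-' from the sandbox name before it
// goes into a bug report.
function stripSandboxPrefix(sandbox: string): string {
  // The optional (?:code-)? group lets the longer 'qwen-code-' prefix match.
  return sandbox.replace(/^qwen-(?:code-)?/, '');
}

console.log(stripSandboxPrefix('qwen-code-test-sandbox')); // test-sandbox
console.log(stripSandboxPrefix('qwen-custom-sandbox')); // custom-sandbox
console.log(stripSandboxPrefix('docker-sandbox')); // docker-sandbox (no prefix, unchanged)
```

Names without the prefix pass through untouched, which is why the tests above assert both the stripped and unstripped forms.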
117
packages/cli/src/utils/systemInfoFields.ts
Normal file
@@ -0,0 +1,117 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import type { ExtendedSystemInfo } from './systemInfo.js';

/**
 * Field configuration for system information display
 */
export interface SystemInfoField {
  label: string;
  key: keyof ExtendedSystemInfo;
}

/**
 * Unified field configuration for system information display.
 * This ensures consistent labeling between /about and /bug commands.
 */
export function getSystemInfoFields(
  info: ExtendedSystemInfo,
): SystemInfoField[] {
  const allFields: SystemInfoField[] = [
    {
      label: 'CLI Version',
      key: 'cliVersion',
    },
    {
      label: 'Git Commit',
      key: 'gitCommit',
    },
    {
      label: 'Model',
      key: 'modelVersion',
    },
    {
      label: 'Sandbox',
      key: 'sandboxEnv',
    },
    {
      label: 'OS Platform',
      key: 'osPlatform',
    },
    {
      label: 'OS Arch',
      key: 'osArch',
    },
    {
      label: 'OS Release',
      key: 'osRelease',
    },
    {
      label: 'Node.js Version',
      key: 'nodeVersion',
    },
    {
      label: 'NPM Version',
      key: 'npmVersion',
    },
    {
      label: 'Session ID',
      key: 'sessionId',
    },
    {
      label: 'Auth Method',
      key: 'selectedAuthType',
    },
    {
      label: 'Base URL',
      key: 'baseUrl',
    },
    {
      label: 'Memory Usage',
      key: 'memoryUsage',
    },
    {
      label: 'IDE Client',
      key: 'ideClient',
    },
  ];

  // Filter out optional fields that are not present
  return allFields.filter((field) => {
    const value = info[field.key];
    // Optional fields: only show if they exist and are non-empty
    if (
      field.key === 'baseUrl' ||
      field.key === 'gitCommit' ||
      field.key === 'ideClient'
    ) {
      return Boolean(value);
    }
    return true;
  });
}

/**
 * Get the value for a field from system info
 */
export function getFieldValue(
  field: SystemInfoField,
  info: ExtendedSystemInfo,
): string {
  const value = info[field.key];

  if (value === undefined || value === null) {
    return '';
  }

  // Special formatting for selectedAuthType
  if (field.key === 'selectedAuthType') {
    return String(value).startsWith('oauth') ? 'OAuth' : String(value);
  }

  return String(value);
}
@@ -22,12 +22,22 @@ vi.mock('os', async (importOriginal) => {
 describe('getUserStartupWarnings', () => {
   let testRootDir: string;
   let homeDir: string;
+  let startupOptions: {
+    workspaceRoot: string;
+    useRipgrep: boolean;
+    useBuiltinRipgrep: boolean;
+  };

   beforeEach(async () => {
     testRootDir = await fs.mkdtemp(path.join(os.tmpdir(), 'warnings-test-'));
     homeDir = path.join(testRootDir, 'home');
     await fs.mkdir(homeDir, { recursive: true });
     vi.mocked(os.homedir).mockReturnValue(homeDir);
+    startupOptions = {
+      workspaceRoot: testRootDir,
+      useRipgrep: true,
+      useBuiltinRipgrep: true,
+    };
   });

   afterEach(async () => {
@@ -37,7 +47,10 @@ describe('getUserStartupWarnings', () => {

   describe('home directory check', () => {
     it('should return a warning when running in home directory', async () => {
-      const warnings = await getUserStartupWarnings(homeDir);
+      const warnings = await getUserStartupWarnings({
+        ...startupOptions,
+        workspaceRoot: homeDir,
+      });
       expect(warnings).toContainEqual(
         expect.stringContaining('home directory'),
       );
@@ -46,7 +59,10 @@ describe('getUserStartupWarnings', () => {
     it('should not return a warning when running in a project directory', async () => {
       const projectDir = path.join(testRootDir, 'project');
       await fs.mkdir(projectDir);
-      const warnings = await getUserStartupWarnings(projectDir);
+      const warnings = await getUserStartupWarnings({
+        ...startupOptions,
+        workspaceRoot: projectDir,
+      });
       expect(warnings).not.toContainEqual(
         expect.stringContaining('home directory'),
       );
@@ -56,7 +72,10 @@ describe('getUserStartupWarnings', () => {
   describe('root directory check', () => {
     it('should return a warning when running in a root directory', async () => {
       const rootDir = path.parse(testRootDir).root;
-      const warnings = await getUserStartupWarnings(rootDir);
+      const warnings = await getUserStartupWarnings({
+        ...startupOptions,
+        workspaceRoot: rootDir,
+      });
       expect(warnings).toContainEqual(
         expect.stringContaining('root directory'),
       );
@@ -68,7 +87,10 @@ describe('getUserStartupWarnings', () => {
     it('should not return a warning when running in a non-root directory', async () => {
       const projectDir = path.join(testRootDir, 'project');
       await fs.mkdir(projectDir);
-      const warnings = await getUserStartupWarnings(projectDir);
+      const warnings = await getUserStartupWarnings({
+        ...startupOptions,
+        workspaceRoot: projectDir,
+      });
       expect(warnings).not.toContainEqual(
         expect.stringContaining('root directory'),
       );
@@ -78,7 +100,10 @@ describe('getUserStartupWarnings', () => {
   describe('error handling', () => {
     it('should handle errors when checking directory', async () => {
       const nonExistentPath = path.join(testRootDir, 'non-existent');
-      const warnings = await getUserStartupWarnings(nonExistentPath);
+      const warnings = await getUserStartupWarnings({
+        ...startupOptions,
+        workspaceRoot: nonExistentPath,
+      });
       const expectedWarning =
         'Could not verify the current directory due to a file system error.';
       expect(warnings).toEqual([expectedWarning, expectedWarning]);
@@ -7,19 +7,26 @@
 import fs from 'node:fs/promises';
 import * as os from 'node:os';
 import path from 'node:path';
+import { canUseRipgrep } from '@qwen-code/qwen-code-core';
+
+type WarningCheckOptions = {
+  workspaceRoot: string;
+  useRipgrep: boolean;
+  useBuiltinRipgrep: boolean;
+};
+
 type WarningCheck = {
   id: string;
-  check: (workspaceRoot: string) => Promise<string | null>;
+  check: (options: WarningCheckOptions) => Promise<string | null>;
 };

 // Individual warning checks
 const homeDirectoryCheck: WarningCheck = {
   id: 'home-directory',
-  check: async (workspaceRoot: string) => {
+  check: async (options: WarningCheckOptions) => {
     try {
       const [workspaceRealPath, homeRealPath] = await Promise.all([
-        fs.realpath(workspaceRoot),
+        fs.realpath(options.workspaceRoot),
         fs.realpath(os.homedir()),
       ]);

@@ -35,9 +42,9 @@ const homeDirectoryCheck: WarningCheck = {

 const rootDirectoryCheck: WarningCheck = {
   id: 'root-directory',
-  check: async (workspaceRoot: string) => {
+  check: async (options: WarningCheckOptions) => {
     try {
-      const workspaceRealPath = await fs.realpath(workspaceRoot);
+      const workspaceRealPath = await fs.realpath(options.workspaceRoot);
       const errorMessage =
         'Warning: You are running Qwen Code in the root directory. Your entire folder structure will be used for context. It is strongly recommended to run in a project-specific directory.';

@@ -53,17 +60,33 @@
   },
 };

+const ripgrepAvailabilityCheck: WarningCheck = {
+  id: 'ripgrep-availability',
+  check: async (options: WarningCheckOptions) => {
+    if (!options.useRipgrep) {
+      return null;
+    }
+
+    const isAvailable = await canUseRipgrep(options.useBuiltinRipgrep);
+    if (!isAvailable) {
+      return 'Ripgrep not available: Please install ripgrep globally to enable faster file content search. Falling back to built-in grep.';
+    }
+    return null;
+  },
+};
+
 // All warning checks
 const WARNING_CHECKS: readonly WarningCheck[] = [
   homeDirectoryCheck,
   rootDirectoryCheck,
+  ripgrepAvailabilityCheck,
 ];

 export async function getUserStartupWarnings(
-  workspaceRoot: string = process.cwd(),
+  options: WarningCheckOptions,
 ): Promise<string[]> {
   const results = await Promise.all(
-    WARNING_CHECKS.map((check) => check.check(workspaceRoot)),
+    WARNING_CHECKS.map((check) => check.check(options)),
   );
   return results.filter((msg) => msg !== null);
 }
@@ -105,34 +105,6 @@ describe('validateNonInterActiveAuth', () => {
     expect(processExitSpy).toHaveBeenCalledWith(1);
   });

-  it('uses LOGIN_WITH_GOOGLE if GOOGLE_GENAI_USE_GCA is set', async () => {
-    process.env['GOOGLE_GENAI_USE_GCA'] = 'true';
-    const nonInteractiveConfig = {
-      refreshAuth: refreshAuthMock,
-    } as unknown as Config;
-    await validateNonInteractiveAuth(
-      undefined,
-      undefined,
-      nonInteractiveConfig,
-      mockSettings,
-    );
-    expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.LOGIN_WITH_GOOGLE);
-  });
-
-  it('uses USE_GEMINI if GEMINI_API_KEY is set', async () => {
-    process.env['GEMINI_API_KEY'] = 'fake-key';
-    const nonInteractiveConfig = {
-      refreshAuth: refreshAuthMock,
-    } as unknown as Config;
-    await validateNonInteractiveAuth(
-      undefined,
-      undefined,
-      nonInteractiveConfig,
-      mockSettings,
-    );
-    expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
-  });
-
   it('uses USE_OPENAI if OPENAI_API_KEY is set', async () => {
     process.env['OPENAI_API_KEY'] = 'fake-openai-key';
     const nonInteractiveConfig = {
@@ -168,104 +140,6 @@ describe('validateNonInterActiveAuth', () => {
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.QWEN_OAUTH);
   });

-  it('uses USE_VERTEX_AI if GOOGLE_GENAI_USE_VERTEXAI is true (with GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION)', async () => {
-    process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
-    process.env['GOOGLE_CLOUD_PROJECT'] = 'test-project';
-    process.env['GOOGLE_CLOUD_LOCATION'] = 'us-central1';
-    const nonInteractiveConfig = {
-      refreshAuth: refreshAuthMock,
-    } as unknown as Config;
-    await validateNonInteractiveAuth(
-      undefined,
-      undefined,
-      nonInteractiveConfig,
-      mockSettings,
-    );
-    expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
-  });
-
-  it('uses USE_VERTEX_AI if GOOGLE_GENAI_USE_VERTEXAI is true and GOOGLE_API_KEY is set', async () => {
-    process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
-    process.env['GOOGLE_API_KEY'] = 'vertex-api-key';
-    const nonInteractiveConfig = {
-      refreshAuth: refreshAuthMock,
-    } as unknown as Config;
-    await validateNonInteractiveAuth(
-      undefined,
-      undefined,
-      nonInteractiveConfig,
-      mockSettings,
-    );
-    expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
-  });
-
-  it('uses LOGIN_WITH_GOOGLE if GOOGLE_GENAI_USE_GCA is set, even with other env vars', async () => {
-    process.env['GOOGLE_GENAI_USE_GCA'] = 'true';
-    process.env['GEMINI_API_KEY'] = 'fake-key';
-    process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
-    process.env['GOOGLE_CLOUD_PROJECT'] = 'test-project';
-    process.env['GOOGLE_CLOUD_LOCATION'] = 'us-central1';
-    const nonInteractiveConfig = {
-      refreshAuth: refreshAuthMock,
-    } as unknown as Config;
-    await validateNonInteractiveAuth(
-      undefined,
-      undefined,
-      nonInteractiveConfig,
-      mockSettings,
-    );
-    expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.LOGIN_WITH_GOOGLE);
-  });
-
-  it('uses USE_VERTEX_AI if both GEMINI_API_KEY and GOOGLE_GENAI_USE_VERTEXAI are set', async () => {
-    process.env['GEMINI_API_KEY'] = 'fake-key';
-    process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
-    process.env['GOOGLE_CLOUD_PROJECT'] = 'test-project';
-    process.env['GOOGLE_CLOUD_LOCATION'] = 'us-central1';
|
|
||||||
const nonInteractiveConfig = {
|
|
||||||
refreshAuth: refreshAuthMock,
|
|
||||||
} as unknown as Config;
|
|
||||||
await validateNonInteractiveAuth(
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
nonInteractiveConfig,
|
|
||||||
mockSettings,
|
|
||||||
);
|
|
||||||
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('uses USE_GEMINI if GOOGLE_GENAI_USE_VERTEXAI is false, GEMINI_API_KEY is set, and project/location are available', async () => {
|
|
||||||
process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'false';
|
|
||||||
process.env['GEMINI_API_KEY'] = 'fake-key';
|
|
||||||
process.env['GOOGLE_CLOUD_PROJECT'] = 'test-project';
|
|
||||||
process.env['GOOGLE_CLOUD_LOCATION'] = 'us-central1';
|
|
||||||
const nonInteractiveConfig = {
|
|
||||||
refreshAuth: refreshAuthMock,
|
|
||||||
} as unknown as Config;
|
|
||||||
await validateNonInteractiveAuth(
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
nonInteractiveConfig,
|
|
||||||
mockSettings,
|
|
||||||
);
|
|
||||||
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('uses configuredAuthType if provided', async () => {
|
|
||||||
// Set required env var for USE_GEMINI
|
|
||||||
process.env['GEMINI_API_KEY'] = 'fake-key';
|
|
||||||
const nonInteractiveConfig = {
|
|
||||||
refreshAuth: refreshAuthMock,
|
|
||||||
} as unknown as Config;
|
|
||||||
await validateNonInteractiveAuth(
|
|
||||||
AuthType.USE_GEMINI,
|
|
||||||
undefined,
|
|
||||||
nonInteractiveConfig,
|
|
||||||
mockSettings,
|
|
||||||
);
|
|
||||||
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
|
|
||||||
});
|
|
||||||
|
|
||||||
it('exits if validateAuthMethod returns error', async () => {
|
it('exits if validateAuthMethod returns error', async () => {
|
||||||
// Mock validateAuthMethod to return error
|
// Mock validateAuthMethod to return error
|
||||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
||||||
@@ -317,26 +191,25 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('uses enforcedAuthType if provided', async () => {
|
it('uses enforcedAuthType if provided', async () => {
|
||||||
mockSettings.merged.security!.auth!.enforcedType = AuthType.USE_GEMINI;
|
mockSettings.merged.security!.auth!.enforcedType = AuthType.USE_OPENAI;
|
||||||
mockSettings.merged.security!.auth!.selectedType = AuthType.USE_GEMINI;
|
mockSettings.merged.security!.auth!.selectedType = AuthType.USE_OPENAI;
|
||||||
// Set required env var for USE_GEMINI to ensure enforcedAuthType takes precedence
|
// Set required env var for USE_OPENAI to ensure enforcedAuthType takes precedence
|
||||||
process.env['GEMINI_API_KEY'] = 'fake-key';
|
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||||
const nonInteractiveConfig = {
|
const nonInteractiveConfig = {
|
||||||
refreshAuth: refreshAuthMock,
|
refreshAuth: refreshAuthMock,
|
||||||
} as unknown as Config;
|
} as unknown as Config;
|
||||||
await validateNonInteractiveAuth(
|
await validateNonInteractiveAuth(
|
||||||
AuthType.USE_GEMINI,
|
AuthType.USE_OPENAI,
|
||||||
undefined,
|
undefined,
|
||||||
nonInteractiveConfig,
|
nonInteractiveConfig,
|
||||||
mockSettings,
|
mockSettings,
|
||||||
);
|
);
|
||||||
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
|
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_OPENAI);
|
||||||
});
|
});
|
||||||
|
|
||||||
it('exits if currentAuthType does not match enforcedAuthType', async () => {
|
it('exits if currentAuthType does not match enforcedAuthType', async () => {
|
||||||
mockSettings.merged.security!.auth!.enforcedType =
|
mockSettings.merged.security!.auth!.enforcedType = AuthType.QWEN_OAUTH;
|
||||||
AuthType.LOGIN_WITH_GOOGLE;
|
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||||
process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
|
|
||||||
const nonInteractiveConfig = {
|
const nonInteractiveConfig = {
|
||||||
refreshAuth: refreshAuthMock,
|
refreshAuth: refreshAuthMock,
|
||||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||||
@@ -346,7 +219,7 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
} as unknown as Config;
|
} as unknown as Config;
|
||||||
try {
|
try {
|
||||||
await validateNonInteractiveAuth(
|
await validateNonInteractiveAuth(
|
||||||
AuthType.USE_GEMINI,
|
AuthType.USE_OPENAI,
|
||||||
undefined,
|
undefined,
|
||||||
nonInteractiveConfig,
|
nonInteractiveConfig,
|
||||||
mockSettings,
|
mockSettings,
|
||||||
@@ -356,7 +229,7 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
expect((e as Error).message).toContain('process.exit(1) called');
|
expect((e as Error).message).toContain('process.exit(1) called');
|
||||||
}
|
}
|
||||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
||||||
'The configured auth type is oauth-personal, but the current auth type is vertex-ai. Please re-authenticate with the correct type.',
|
'The configured auth type is qwen-oauth, but the current auth type is openai. Please re-authenticate with the correct type.',
|
||||||
);
|
);
|
||||||
expect(processExitSpy).toHaveBeenCalledWith(1);
|
expect(processExitSpy).toHaveBeenCalledWith(1);
|
||||||
});
|
});
|
||||||
@@ -394,8 +267,8 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
});
|
});
|
||||||
|
|
||||||
it('prints JSON error when enforced auth mismatches current auth and exits with code 1', async () => {
|
it('prints JSON error when enforced auth mismatches current auth and exits with code 1', async () => {
|
||||||
mockSettings.merged.security!.auth!.enforcedType = AuthType.USE_GEMINI;
|
mockSettings.merged.security!.auth!.enforcedType = AuthType.QWEN_OAUTH;
|
||||||
process.env['GOOGLE_GENAI_USE_GCA'] = 'true';
|
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||||
|
|
||||||
const nonInteractiveConfig = {
|
const nonInteractiveConfig = {
|
||||||
refreshAuth: refreshAuthMock,
|
refreshAuth: refreshAuthMock,
|
||||||
@@ -424,14 +297,14 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
expect(payload.error.type).toBe('Error');
|
expect(payload.error.type).toBe('Error');
|
||||||
expect(payload.error.code).toBe(1);
|
expect(payload.error.code).toBe(1);
|
||||||
expect(payload.error.message).toContain(
|
expect(payload.error.message).toContain(
|
||||||
'The configured auth type is gemini-api-key, but the current auth type is oauth-personal.',
|
'The configured auth type is qwen-oauth, but the current auth type is openai.',
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
it('prints JSON error when validateAuthMethod fails and exits with code 1', async () => {
|
it('prints JSON error when validateAuthMethod fails and exits with code 1', async () => {
|
||||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
||||||
process.env['GEMINI_API_KEY'] = 'fake-key';
|
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||||
|
|
||||||
const nonInteractiveConfig = {
|
const nonInteractiveConfig = {
|
||||||
refreshAuth: refreshAuthMock,
|
refreshAuth: refreshAuthMock,
|
||||||
@@ -444,7 +317,7 @@ describe('validateNonInterActiveAuth', () => {
|
|||||||
let thrown: Error | undefined;
|
let thrown: Error | undefined;
|
||||||
try {
|
try {
|
||||||
await validateNonInteractiveAuth(
|
await validateNonInteractiveAuth(
|
||||||
AuthType.USE_GEMINI,
|
AuthType.USE_OPENAI,
|
||||||
undefined,
|
undefined,
|
||||||
nonInteractiveConfig,
|
nonInteractiveConfig,
|
||||||
mockSettings,
|
mockSettings,
|
||||||
|
|||||||
@@ -12,21 +12,13 @@ import { type LoadedSettings } from './config/settings.js';
 import { handleError } from './utils/errors.js';
 
 function getAuthTypeFromEnv(): AuthType | undefined {
-  if (process.env['GOOGLE_GENAI_USE_GCA'] === 'true') {
-    return AuthType.LOGIN_WITH_GOOGLE;
-  }
-  if (process.env['GOOGLE_GENAI_USE_VERTEXAI'] === 'true') {
-    return AuthType.USE_VERTEX_AI;
-  }
-  if (process.env['GEMINI_API_KEY']) {
-    return AuthType.USE_GEMINI;
-  }
   if (process.env['OPENAI_API_KEY']) {
     return AuthType.USE_OPENAI;
   }
   if (process.env['QWEN_OAUTH']) {
     return AuthType.QWEN_OAUTH;
   }
 
   return undefined;
 }
 
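After this trimming, `getAuthTypeFromEnv` only consults `OPENAI_API_KEY` and then `QWEN_OAUTH`, returning `undefined` otherwise so the caller falls back to the configured or enforced auth type. A minimal standalone sketch of that precedence — `detectAuthFromEnv` and the string literals are hypothetical stand-ins for the real `AuthType` enum:

```typescript
type DetectedAuth = 'openai' | 'qwen-oauth' | undefined;

// Mirrors the trimmed helper: first match wins, OPENAI_API_KEY before QWEN_OAUTH.
function detectAuthFromEnv(
  env: Record<string, string | undefined>,
): DetectedAuth {
  if (env['OPENAI_API_KEY']) {
    return 'openai'; // corresponds to AuthType.USE_OPENAI
  }
  if (env['QWEN_OAUTH']) {
    return 'qwen-oauth'; // corresponds to AuthType.QWEN_OAUTH
  }
  return undefined; // caller falls back to configured/enforced auth
}
```

When both variables are set, `OPENAI_API_KEY` wins, which is the ordering the tests above rely on.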
@@ -7,7 +7,6 @@
 /* ACP defines a schema for a simple (experimental) JSON-RPC protocol that allows GUI applications to interact with agents. */
 
 import { z } from 'zod';
-import { EOL } from 'node:os';
 import * as schema from './schema.js';
 export * from './schema.js';
 
@@ -173,7 +172,7 @@ class Connection {
     const decoder = new TextDecoder();
     for await (const chunk of output) {
       content += decoder.decode(chunk, { stream: true });
-      const lines = content.split(EOL);
+      const lines = content.split('\n');
       content = lines.pop() || '';
 
       for (const line of lines) {
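Switching from `split(EOL)` to `split('\n')` matters on Windows, where `os.EOL` is `'\r\n'`: a peer that frames its JSON-RPC messages with bare `'\n'` would never be split there. A minimal sketch of the buffered framing loop, under the assumption that the protocol delimiter is always `'\n'` (the name `splitFrames` is hypothetical):

```typescript
// Splits buffered stream data into complete '\n'-delimited frames,
// carrying any trailing partial frame forward in `rest`.
function splitFrames(
  buffer: string,
  chunk: string,
): { lines: string[]; rest: string } {
  const content = buffer + chunk;
  const lines = content.split('\n'); // protocol delimiter, not os.EOL
  const rest = lines.pop() ?? ''; // last element is the incomplete frame
  return { lines, rest };
}
```

Each complete line is then parsed as one JSON-RPC message; `rest` is prepended to the next chunk.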
@@ -12,6 +12,12 @@ import type {
   GeminiChat,
   ToolCallConfirmationDetails,
   ToolResult,
+  SubAgentEventEmitter,
+  SubAgentToolCallEvent,
+  SubAgentToolResultEvent,
+  SubAgentApprovalRequestEvent,
+  AnyDeclarativeTool,
+  AnyToolInvocation,
 } from '@qwen-code/qwen-code-core';
 import {
   AuthType,
@@ -28,6 +34,10 @@ import {
   getErrorStatus,
   isWithinRoot,
   isNodeError,
+  SubAgentEventType,
+  TaskTool,
+  Kind,
+  TodoWriteTool,
 } from '@qwen-code/qwen-code-core';
 import * as acp from './acp.js';
 import { AcpFileSystemService } from './fileSystemService.js';
@@ -403,9 +413,34 @@ class Session {
       );
     }
 
+    // Detect TodoWriteTool early - route to plan updates instead of tool_call events
+    const isTodoWriteTool =
+      fc.name === TodoWriteTool.Name || tool.name === TodoWriteTool.Name;
+
+    // Declare subAgentToolEventListeners outside try block for cleanup in catch
+    let subAgentToolEventListeners: Array<() => void> = [];
+
     try {
       const invocation = tool.build(args);
 
+      // Detect TaskTool and set up sub-agent tool tracking
+      const isTaskTool = tool.name === TaskTool.Name;
+
+      if (isTaskTool && 'eventEmitter' in invocation) {
+        // Access eventEmitter from TaskTool invocation
+        const taskEventEmitter = (
+          invocation as {
+            eventEmitter: SubAgentEventEmitter;
+          }
+        ).eventEmitter;
+
+        // Set up sub-agent tool tracking
+        subAgentToolEventListeners = this.setupSubAgentToolTracking(
+          taskEventEmitter,
+          abortSignal,
+        );
+      }
+
       const confirmationDetails =
         await invocation.shouldConfirmExecute(abortSignal);
 
@@ -460,7 +495,8 @@ class Session {
            throw new Error(`Unexpected: ${resultOutcome}`);
          }
        }
-      } else {
+      } else if (!isTodoWriteTool) {
+        // Skip tool_call event for TodoWriteTool
         await this.sendUpdate({
           sessionUpdate: 'tool_call',
           toolCallId: callId,
@@ -473,14 +509,61 @@ class Session {
       }
 
       const toolResult: ToolResult = await invocation.execute(abortSignal);
-      const content = toToolCallContent(toolResult);
 
-      await this.sendUpdate({
-        sessionUpdate: 'tool_call_update',
-        toolCallId: callId,
-        status: 'completed',
-        content: content ? [content] : [],
-      });
+      // Clean up event listeners
+      subAgentToolEventListeners.forEach((cleanup) => cleanup());
+
+      // Handle TodoWriteTool: extract todos and send plan update
+      if (isTodoWriteTool) {
+        // Extract todos from args (initial state)
+        let todos: Array<{
+          id: string;
+          content: string;
+          status: 'pending' | 'in_progress' | 'completed';
+        }> = [];
+
+        if (Array.isArray(args['todos'])) {
+          todos = args['todos'] as Array<{
+            id: string;
+            content: string;
+            status: 'pending' | 'in_progress' | 'completed';
+          }>;
+        }
+
+        // If returnDisplay has todos (e.g., modified by user), use those instead
+        if (
+          toolResult.returnDisplay &&
+          typeof toolResult.returnDisplay === 'object' &&
+          'type' in toolResult.returnDisplay &&
+          toolResult.returnDisplay.type === 'todo_list' &&
+          'todos' in toolResult.returnDisplay &&
+          Array.isArray(toolResult.returnDisplay.todos)
+        ) {
+          todos = toolResult.returnDisplay.todos;
+        }
+
+        // Convert todos to plan entries and send plan update
+        if (todos.length > 0 || Array.isArray(args['todos'])) {
+          const planEntries = convertTodosToPlanEntries(todos);
+          await this.sendUpdate({
+            sessionUpdate: 'plan',
+            entries: planEntries,
+          });
+        }
+
+        // Skip tool_call_update event for TodoWriteTool
+        // Still log and return function response for LLM
+      } else {
+        // Normal tool handling: send tool_call_update
+        const content = toToolCallContent(toolResult);
+
+        await this.sendUpdate({
+          sessionUpdate: 'tool_call_update',
+          toolCallId: callId,
+          status: 'completed',
+          content: content ? [content] : [],
+        });
+      }
 
       const durationMs = Date.now() - startTime;
       logToolCall(this.config, {
@@ -500,6 +583,9 @@ class Session {
 
       return convertToFunctionResponse(fc.name, callId, toolResult.llmContent);
     } catch (e) {
+      // Ensure cleanup on error
+      subAgentToolEventListeners.forEach((cleanup) => cleanup());
+
       const error = e instanceof Error ? e : new Error(String(e));
 
       await this.sendUpdate({
@@ -515,6 +601,300 @@ class Session {
     }
   }
 
+  /**
+   * Sets up event listeners to track sub-agent tool calls within a TaskTool execution.
+   * Converts subagent tool call events into zedIntegration session updates.
+   *
+   * @param eventEmitter - The SubAgentEventEmitter from TaskTool
+   * @param abortSignal - Signal to abort tracking if parent is cancelled
+   * @returns Array of cleanup functions to remove event listeners
+   */
+  private setupSubAgentToolTracking(
+    eventEmitter: SubAgentEventEmitter,
+    abortSignal: AbortSignal,
+  ): Array<() => void> {
+    const cleanupFunctions: Array<() => void> = [];
+    const toolRegistry = this.config.getToolRegistry();
+
+    // Track subagent tool call states
+    const subAgentToolStates = new Map<
+      string,
+      {
+        tool?: AnyDeclarativeTool;
+        invocation?: AnyToolInvocation;
+        args?: Record<string, unknown>;
+      }
+    >();
+
+    // Listen for tool call start
+    const onToolCall = (...args: unknown[]) => {
+      const event = args[0] as SubAgentToolCallEvent;
+      if (abortSignal.aborted) return;
+
+      const subAgentTool = toolRegistry.getTool(event.name);
+      let subAgentInvocation: AnyToolInvocation | undefined;
+      let toolKind: acp.ToolKind = 'other';
+      let locations: acp.ToolCallLocation[] = [];
+
+      if (subAgentTool) {
+        try {
+          subAgentInvocation = subAgentTool.build(event.args);
+          toolKind = this.mapToolKind(subAgentTool.kind);
+          locations = subAgentInvocation.toolLocations().map((loc) => ({
+            path: loc.path,
+            line: loc.line ?? null,
+          }));
+        } catch (e) {
+          // If building fails, continue with defaults
+          console.warn(`Failed to build subagent tool ${event.name}:`, e);
+        }
+      }
+
+      // Save state for subsequent updates
+      subAgentToolStates.set(event.callId, {
+        tool: subAgentTool,
+        invocation: subAgentInvocation,
+        args: event.args,
+      });
+
+      // Check if this is TodoWriteTool - if so, skip sending tool_call event
+      // Plan update will be sent in onToolResult when we have the final state
+      if (event.name === TodoWriteTool.Name) {
+        return;
+      }
+
+      // Send tool call start update with rawInput
+      void this.sendUpdate({
+        sessionUpdate: 'tool_call',
+        toolCallId: event.callId,
+        status: 'in_progress',
+        title: event.description || event.name,
+        content: [],
+        locations,
+        kind: toolKind,
+        rawInput: event.args,
+      });
+    };
+
+    // Listen for tool call result
+    const onToolResult = (...args: unknown[]) => {
+      const event = args[0] as SubAgentToolResultEvent;
+      if (abortSignal.aborted) return;
+
+      const state = subAgentToolStates.get(event.callId);
+
+      // Check if this is TodoWriteTool - if so, route to plan updates
+      if (event.name === TodoWriteTool.Name) {
+        let todos:
+          | Array<{
+              id: string;
+              content: string;
+              status: 'pending' | 'in_progress' | 'completed';
+            }>
+          | undefined;
+
+        // Try to extract todos from resultDisplay first (final state)
+        if (event.resultDisplay) {
+          try {
+            // resultDisplay might be a JSON stringified object
+            const parsed =
+              typeof event.resultDisplay === 'string'
+                ? JSON.parse(event.resultDisplay)
+                : event.resultDisplay;
+
+            if (
+              typeof parsed === 'object' &&
+              parsed !== null &&
+              'type' in parsed &&
+              parsed.type === 'todo_list' &&
+              'todos' in parsed &&
+              Array.isArray(parsed.todos)
+            ) {
+              todos = parsed.todos;
+            }
+          } catch {
+            // If parsing fails, ignore - resultDisplay might not be JSON
+          }
+        }
+
+        // Fallback to args if resultDisplay doesn't have todos
+        if (!todos && state?.args && Array.isArray(state.args['todos'])) {
+          todos = state.args['todos'] as Array<{
+            id: string;
+            content: string;
+            status: 'pending' | 'in_progress' | 'completed';
+          }>;
+        }
+
+        // Send plan update if we have todos
+        if (todos) {
+          const planEntries = convertTodosToPlanEntries(todos);
+          void this.sendUpdate({
+            sessionUpdate: 'plan',
+            entries: planEntries,
+          });
+        }
+
+        // Skip sending tool_call_update event for TodoWriteTool
+        // Clean up state
+        subAgentToolStates.delete(event.callId);
+        return;
+      }
+
+      let content: acp.ToolCallContent[] = [];
+
+      // If there's a result display, try to convert to ToolCallContent
+      if (event.resultDisplay && state?.invocation) {
+        // resultDisplay is typically a string
+        if (typeof event.resultDisplay === 'string') {
+          content = [
+            {
+              type: 'content',
+              content: {
+                type: 'text',
+                text: event.resultDisplay,
+              },
+            },
+          ];
+        }
+      }
+
+      // Send tool call completion update
+      void this.sendUpdate({
+        sessionUpdate: 'tool_call_update',
+        toolCallId: event.callId,
+        status: event.success ? 'completed' : 'failed',
+        content: content.length > 0 ? content : [],
+        title: state?.invocation?.getDescription() ?? event.name,
+        kind: state?.tool ? this.mapToolKind(state.tool.kind) : null,
+        locations:
+          state?.invocation?.toolLocations().map((loc) => ({
+            path: loc.path,
+            line: loc.line ?? null,
+          })) ?? null,
+        rawInput: state?.args,
+      });
+
+      // Clean up state
+      subAgentToolStates.delete(event.callId);
+    };
+
+    // Listen for permission requests
+    const onToolWaitingApproval = async (...args: unknown[]) => {
+      const event = args[0] as SubAgentApprovalRequestEvent;
+      if (abortSignal.aborted) return;
+
+      const state = subAgentToolStates.get(event.callId);
+      const content: acp.ToolCallContent[] = [];
+
+      // Handle different confirmation types
+      if (event.confirmationDetails.type === 'edit') {
+        const editDetails = event.confirmationDetails as unknown as {
+          type: 'edit';
+          fileName: string;
+          originalContent: string | null;
+          newContent: string;
+        };
+        content.push({
+          type: 'diff',
+          path: editDetails.fileName,
+          oldText: editDetails.originalContent ?? '',
+          newText: editDetails.newContent,
+        });
+      }
+
+      // Build permission request options from confirmation details
+      // event.confirmationDetails already contains all fields except onConfirm,
+      // which we add here to satisfy the type requirement for toPermissionOptions
+      const fullConfirmationDetails = {
+        ...event.confirmationDetails,
+        onConfirm: async () => {
+          // This is a placeholder - the actual response is handled via event.respond
+        },
+      } as unknown as ToolCallConfirmationDetails;
+
+      const params: acp.RequestPermissionRequest = {
+        sessionId: this.id,
+        options: toPermissionOptions(fullConfirmationDetails),
+        toolCall: {
+          toolCallId: event.callId,
+          status: 'pending',
+          title: event.description || event.name,
+          content,
+          locations:
+            state?.invocation?.toolLocations().map((loc) => ({
+              path: loc.path,
+              line: loc.line ?? null,
+            })) ?? [],
+          kind: state?.tool ? this.mapToolKind(state.tool.kind) : 'other',
+          rawInput: state?.args,
+        },
+      };
+
+      try {
+        // Request permission from zed client
+        const output = await this.client.requestPermission(params);
+        const outcome =
+          output.outcome.outcome === 'cancelled'
+            ? ToolConfirmationOutcome.Cancel
+            : z
+                .nativeEnum(ToolConfirmationOutcome)
+                .parse(output.outcome.optionId);
+
+        // Respond to subagent with the outcome
+        await event.respond(outcome);
+      } catch (error) {
+        // If permission request fails, cancel the tool call
+        console.error(
+          `Permission request failed for subagent tool ${event.name}:`,
+          error,
+        );
+        await event.respond(ToolConfirmationOutcome.Cancel);
+      }
+    };
+
+    // Register event listeners
+    eventEmitter.on(SubAgentEventType.TOOL_CALL, onToolCall);
+    eventEmitter.on(SubAgentEventType.TOOL_RESULT, onToolResult);
+    eventEmitter.on(
+      SubAgentEventType.TOOL_WAITING_APPROVAL,
+      onToolWaitingApproval,
+    );
+
+    // Return cleanup functions
+    cleanupFunctions.push(() => {
+      eventEmitter.off(SubAgentEventType.TOOL_CALL, onToolCall);
+      eventEmitter.off(SubAgentEventType.TOOL_RESULT, onToolResult);
+      eventEmitter.off(
+        SubAgentEventType.TOOL_WAITING_APPROVAL,
+        onToolWaitingApproval,
+      );
+    });
+
+    return cleanupFunctions;
+  }
+
+  /**
+   * Maps core Tool Kind enum to ACP ToolKind string literals.
+   *
+   * @param kind - The core Kind enum value
+   * @returns The corresponding ACP ToolKind string literal
+   */
+  private mapToolKind(kind: Kind): acp.ToolKind {
+    const kindMap: Record<Kind, acp.ToolKind> = {
+      [Kind.Read]: 'read',
+      [Kind.Edit]: 'edit',
+      [Kind.Delete]: 'delete',
+      [Kind.Move]: 'move',
+      [Kind.Search]: 'search',
+      [Kind.Execute]: 'execute',
+      [Kind.Think]: 'think',
+      [Kind.Fetch]: 'fetch',
+      [Kind.Other]: 'other',
+    };
+    return kindMap[kind] ?? 'other';
+  }
+
   async #resolvePrompt(
     message: acp.ContentBlock[],
     abortSignal: AbortSignal,
@@ -859,6 +1239,27 @@ class Session {
   }
 }
 
+/**
+ * Converts todo items to plan entries format for zed integration.
+ * Maps todo status to plan status and assigns a default priority.
+ *
+ * @param todos - Array of todo items with id, content, and status
+ * @returns Array of plan entries with content, priority, and status
+ */
+function convertTodosToPlanEntries(
+  todos: Array<{
+    id: string;
+    content: string;
+    status: 'pending' | 'in_progress' | 'completed';
+  }>,
+): acp.PlanEntry[] {
+  return todos.map((todo) => ({
+    content: todo.content,
+    priority: 'medium' as const, // Default priority since todos don't have priority
+    status: todo.status,
+  }));
+}
+
 function toToolCallContent(toolResult: ToolResult): acp.ToolCallContent | null {
   if (toolResult.error?.message) {
     throw new Error(toolResult.error.message);
@@ -870,26 +1271,6 @@ function toToolCallContent(toolResult: ToolResult): acp.ToolCallContent | null {
       type: 'content',
       content: { type: 'text', text: toolResult.returnDisplay },
     };
-  } else if (
-    'type' in toolResult.returnDisplay &&
-    toolResult.returnDisplay.type === 'todo_list'
-  ) {
-    // Handle TodoResultDisplay - convert to text representation
-    const todoText = toolResult.returnDisplay.todos
-      .map((todo) => {
-        const statusIcon = {
-          pending: '○',
-          in_progress: '◐',
-          completed: '●',
-        }[todo.status];
-        return `${statusIcon} ${todo.content}`;
-      })
-      .join('\n');
-
-    return {
-      type: 'content',
-      content: { type: 'text', text: todoText },
-    };
   } else if (
     'type' in toolResult.returnDisplay &&
     toolResult.returnDisplay.type === 'plan_summary'
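The `convertTodosToPlanEntries` helper this change introduces is a pure function, so it can be exercised in isolation. A standalone sketch with the relevant slice of the `acp.PlanEntry` shape inlined for illustration (the inline `PlanEntry` interface here is an assumption, not the real ACP type):

```typescript
type TodoStatus = 'pending' | 'in_progress' | 'completed';

// Assumed minimal shape of acp.PlanEntry for this sketch.
interface PlanEntry {
  content: string;
  priority: 'high' | 'medium' | 'low';
  status: TodoStatus;
}

// Same mapping as the diff: copy content and status, default the priority.
function convertTodosToPlanEntries(
  todos: Array<{ id: string; content: string; status: TodoStatus }>,
): PlanEntry[] {
  return todos.map((todo) => ({
    content: todo.content,
    priority: 'medium' as const, // todos carry no priority, so a default is used
    status: todo.status,
  }));
}
```

Because todo status literals are reused verbatim as plan status, a todo flipped to `in_progress` shows up as an in-progress plan entry in the client with no further translation.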
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code-core",
-  "version": "0.1.0",
+  "version": "0.2.1",
   "description": "Qwen Code Core",
   "repository": {
     "type": "git",
@@ -16,6 +16,7 @@ import {
   QwenLogger,
 } from '../telemetry/index.js';
 import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
+import { DEFAULT_DASHSCOPE_BASE_URL } from '../core/openaiContentGenerator/constants.js';
 import {
   AuthType,
   createContentGeneratorConfig,
@@ -44,6 +45,15 @@ import { logRipgrepFallback } from '../telemetry/loggers.js';
 import { RipgrepFallbackEvent } from '../telemetry/types.js';
 import { ToolRegistry } from '../tools/tool-registry.js';

+function createToolMock(toolName: string) {
+  const ToolMock = vi.fn();
+  Object.defineProperty(ToolMock, 'Name', {
+    value: toolName,
+    writable: true,
+  });
+  return ToolMock;
+}
+
 vi.mock('fs', async (importOriginal) => {
   const actual = await importOriginal<typeof import('fs')>();
   return {
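The `createToolMock` helper introduced in this hunk gives each vitest tool mock a static `Name` property, mirroring the real tool classes so that name-based enable/exclude filtering can match the mocked constructors. A standalone sketch of the same pattern (without vitest, so the shape can be exercised in isolation; illustrative, not the repository's code):

```typescript
// Sketch of the createToolMock pattern: a mock constructor carrying a
// static `Name` property, like the real tool classes it stands in for.
function createToolMock(toolName: string) {
  class ToolMock {}
  // Attach the static `Name` used for name-based tool filtering.
  Object.defineProperty(ToolMock, 'Name', { value: toolName, writable: true });
  return ToolMock as (new () => object) & { Name: string };
}

const ShellMock = createToolMock('run_shell_command');
console.log(ShellMock.Name); // run_shell_command
console.log(new ShellMock() instanceof ShellMock); // true
```

Because the mock is still a constructor, `instanceof` checks in the tests (e.g. `call[0] instanceof vi.mocked(ShellTool)`) keep working while the `Name` property drives registration filtering.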
@@ -72,23 +82,41 @@ vi.mock('../utils/memoryDiscovery.js', () => ({
 }));

 // Mock individual tools if their constructors are complex or have side effects
-vi.mock('../tools/ls');
-vi.mock('../tools/read-file');
-vi.mock('../tools/grep.js');
+vi.mock('../tools/ls', () => ({
+  LSTool: createToolMock('list_directory'),
+}));
+vi.mock('../tools/read-file', () => ({
+  ReadFileTool: createToolMock('read_file'),
+}));
+vi.mock('../tools/grep.js', () => ({
+  GrepTool: createToolMock('grep_search'),
+}));
 vi.mock('../tools/ripGrep.js', () => ({
-  RipGrepTool: class MockRipGrepTool {},
+  RipGrepTool: createToolMock('grep_search'),
 }));
 vi.mock('../utils/ripgrepUtils.js', () => ({
   canUseRipgrep: vi.fn(),
 }));
-vi.mock('../tools/glob');
-vi.mock('../tools/edit');
-vi.mock('../tools/shell');
-vi.mock('../tools/write-file');
-vi.mock('../tools/web-fetch');
-vi.mock('../tools/read-many-files');
+vi.mock('../tools/glob', () => ({
+  GlobTool: createToolMock('glob'),
+}));
+vi.mock('../tools/edit', () => ({
+  EditTool: createToolMock('edit'),
+}));
+vi.mock('../tools/shell', () => ({
+  ShellTool: createToolMock('run_shell_command'),
+}));
+vi.mock('../tools/write-file', () => ({
+  WriteFileTool: createToolMock('write_file'),
+}));
+vi.mock('../tools/web-fetch', () => ({
+  WebFetchTool: createToolMock('web_fetch'),
+}));
+vi.mock('../tools/read-many-files', () => ({
+  ReadManyFilesTool: createToolMock('read_many_files'),
+}));
 vi.mock('../tools/memoryTool', () => ({
-  MemoryTool: vi.fn(),
+  MemoryTool: createToolMock('save_memory'),
   setGeminiMdFilename: vi.fn(),
   getCurrentGeminiMdFilename: vi.fn(() => 'QWEN.md'), // Mock the original filename
   DEFAULT_CONTEXT_FILENAME: 'QWEN.md',
@@ -153,6 +181,11 @@ vi.mock('../core/tokenLimits.js', () => ({

 describe('Server Config (config.ts)', () => {
   const MODEL = 'qwen3-coder-plus';

+  // Default mock for canUseRipgrep to return true (tests that care about ripgrep will override this)
+  beforeEach(() => {
+    vi.mocked(canUseRipgrep).mockResolvedValue(true);
+  });
   const SANDBOX: SandboxConfig = {
     command: 'docker',
     image: 'qwen-code-sandbox',
@@ -250,6 +283,7 @@ describe('Server Config (config.ts)', () => {
       authType,
       {
         model: MODEL,
+        baseUrl: DEFAULT_DASHSCOPE_BASE_URL,
       },
     );
     // Verify that contentGeneratorConfig is updated
@@ -576,11 +610,45 @@ describe('Server Config (config.ts)', () => {
     });
   });

+  describe('UseBuiltinRipgrep Configuration', () => {
+    it('should default useBuiltinRipgrep to true when not provided', () => {
+      const config = new Config(baseParams);
+      expect(config.getUseBuiltinRipgrep()).toBe(true);
+    });
+
+    it('should set useBuiltinRipgrep to false when provided as false', () => {
+      const paramsWithBuiltinRipgrep: ConfigParameters = {
+        ...baseParams,
+        useBuiltinRipgrep: false,
+      };
+      const config = new Config(paramsWithBuiltinRipgrep);
+      expect(config.getUseBuiltinRipgrep()).toBe(false);
+    });
+
+    it('should set useBuiltinRipgrep to true when explicitly provided as true', () => {
+      const paramsWithBuiltinRipgrep: ConfigParameters = {
+        ...baseParams,
+        useBuiltinRipgrep: true,
+      };
+      const config = new Config(paramsWithBuiltinRipgrep);
+      expect(config.getUseBuiltinRipgrep()).toBe(true);
+    });
+
+    it('should default useBuiltinRipgrep to true when undefined', () => {
+      const paramsWithUndefinedBuiltinRipgrep: ConfigParameters = {
+        ...baseParams,
+        useBuiltinRipgrep: undefined,
+      };
+      const config = new Config(paramsWithUndefinedBuiltinRipgrep);
+      expect(config.getUseBuiltinRipgrep()).toBe(true);
+    });
+  });
+
   describe('createToolRegistry', () => {
     it('should register a tool if coreTools contains an argument-specific pattern', async () => {
       const params: ConfigParameters = {
         ...baseParams,
-        coreTools: ['ShellTool(git status)'],
+        coreTools: ['Shell(git status)'], // Use display name instead of class name
       };
       const config = new Config(params);
       await config.initialize();
@@ -605,6 +673,89 @@ describe('Server Config (config.ts)', () => {
       expect(wasReadFileToolRegistered).toBe(false);
     });

+    it('should register a tool if coreTools contains the displayName', async () => {
+      const params: ConfigParameters = {
+        ...baseParams,
+        coreTools: ['Shell'],
+      };
+      const config = new Config(params);
+      await config.initialize();
+
+      const registerToolMock = (
+        (await vi.importMock('../tools/tool-registry')) as {
+          ToolRegistry: { prototype: { registerTool: Mock } };
+        }
+      ).ToolRegistry.prototype.registerTool;
+
+      const wasShellToolRegistered = (registerToolMock as Mock).mock.calls.some(
+        (call) => call[0] instanceof vi.mocked(ShellTool),
+      );
+      expect(wasShellToolRegistered).toBe(true);
+    });
+
+    it('should register a tool if coreTools contains the displayName with argument-specific pattern', async () => {
+      const params: ConfigParameters = {
+        ...baseParams,
+        coreTools: ['Shell(git status)'],
+      };
+      const config = new Config(params);
+      await config.initialize();
+
+      const registerToolMock = (
+        (await vi.importMock('../tools/tool-registry')) as {
+          ToolRegistry: { prototype: { registerTool: Mock } };
+        }
+      ).ToolRegistry.prototype.registerTool;
+
+      const wasShellToolRegistered = (registerToolMock as Mock).mock.calls.some(
+        (call) => call[0] instanceof vi.mocked(ShellTool),
+      );
+      expect(wasShellToolRegistered).toBe(true);
+    });
+
+    it('should register a tool if coreTools contains a legacy tool name alias', async () => {
+      const params: ConfigParameters = {
+        ...baseParams,
+        useRipgrep: false,
+        coreTools: ['search_file_content'],
+      };
+      const config = new Config(params);
+      await config.initialize();
+
+      const registerToolMock = (
+        (await vi.importMock('../tools/tool-registry')) as {
+          ToolRegistry: { prototype: { registerTool: Mock } };
+        }
+      ).ToolRegistry.prototype.registerTool;
+
+      const wasGrepToolRegistered = (registerToolMock as Mock).mock.calls.some(
+        (call) => call[0] instanceof vi.mocked(GrepTool),
+      );
+      expect(wasGrepToolRegistered).toBe(true);
+    });
+
+    it('should not register a tool if excludeTools contains a legacy display name alias', async () => {
+      const params: ConfigParameters = {
+        ...baseParams,
+        useRipgrep: false,
+        coreTools: undefined,
+        excludeTools: ['SearchFiles'],
+      };
+      const config = new Config(params);
+      await config.initialize();
+
+      const registerToolMock = (
+        (await vi.importMock('../tools/tool-registry')) as {
+          ToolRegistry: { prototype: { registerTool: Mock } };
+        }
+      ).ToolRegistry.prototype.registerTool;
+
+      const wasGrepToolRegistered = (registerToolMock as Mock).mock.calls.some(
+        (call) => call[0] instanceof vi.mocked(GrepTool),
+      );
+      expect(wasGrepToolRegistered).toBe(false);
+    });
+
     describe('with minified tool class names', () => {
       beforeEach(() => {
         Object.defineProperty(
@@ -630,7 +781,27 @@ describe('Server Config (config.ts)', () => {
       it('should register a tool if coreTools contains the non-minified class name', async () => {
         const params: ConfigParameters = {
           ...baseParams,
-          coreTools: ['ShellTool'],
+          coreTools: ['Shell'], // Use display name instead of class name
+        };
+        const config = new Config(params);
+        await config.initialize();
+
+        const registerToolMock = (
+          (await vi.importMock('../tools/tool-registry')) as {
+            ToolRegistry: { prototype: { registerTool: Mock } };
+          }
+        ).ToolRegistry.prototype.registerTool;
+
+        const wasShellToolRegistered = (
+          registerToolMock as Mock
+        ).mock.calls.some((call) => call[0] instanceof vi.mocked(ShellTool));
+        expect(wasShellToolRegistered).toBe(true);
+      });
+
+      it('should register a tool if coreTools contains the displayName', async () => {
+        const params: ConfigParameters = {
+          ...baseParams,
+          coreTools: ['Shell'],
         };
         const config = new Config(params);
         await config.initialize();
@@ -651,7 +822,28 @@ describe('Server Config (config.ts)', () => {
         const params: ConfigParameters = {
           ...baseParams,
           coreTools: undefined, // all tools enabled by default
-          excludeTools: ['ShellTool'],
+          excludeTools: ['Shell'], // Use display name instead of class name
+        };
+        const config = new Config(params);
+        await config.initialize();
+
+        const registerToolMock = (
+          (await vi.importMock('../tools/tool-registry')) as {
+            ToolRegistry: { prototype: { registerTool: Mock } };
+          }
+        ).ToolRegistry.prototype.registerTool;
+
+        const wasShellToolRegistered = (
+          registerToolMock as Mock
+        ).mock.calls.some((call) => call[0] instanceof vi.mocked(ShellTool));
+        expect(wasShellToolRegistered).toBe(false);
+      });
+
+      it('should not register a tool if excludeTools contains the displayName', async () => {
+        const params: ConfigParameters = {
+          ...baseParams,
+          coreTools: undefined, // all tools enabled by default
+          excludeTools: ['Shell'],
         };
         const config = new Config(params);
         await config.initialize();
@@ -671,7 +863,27 @@ describe('Server Config (config.ts)', () => {
      it('should register a tool if coreTools contains an argument-specific pattern with the non-minified class name', async () => {
        const params: ConfigParameters = {
          ...baseParams,
-          coreTools: ['ShellTool(git status)'],
+          coreTools: ['Shell(git status)'], // Use display name instead of class name
+        };
+        const config = new Config(params);
+        await config.initialize();
+
+        const registerToolMock = (
+          (await vi.importMock('../tools/tool-registry')) as {
+            ToolRegistry: { prototype: { registerTool: Mock } };
+          }
+        ).ToolRegistry.prototype.registerTool;
+
+        const wasShellToolRegistered = (
+          registerToolMock as Mock
+        ).mock.calls.some((call) => call[0] instanceof vi.mocked(ShellTool));
+        expect(wasShellToolRegistered).toBe(true);
+      });
+
+      it('should register a tool if coreTools contains an argument-specific pattern with the displayName', async () => {
+        const params: ConfigParameters = {
+          ...baseParams,
+          coreTools: ['Shell(git status)'],
         };
         const config = new Config(params);
         await config.initialize();
@@ -697,13 +909,13 @@ describe('Server Config (config.ts)', () => {

   it('should return the calculated threshold when it is smaller than the default', () => {
     const config = new Config(baseParams);
-    vi.mocked(tokenLimit).mockReturnValue(32000);
+    vi.mocked(tokenLimit).mockReturnValue(8000);
     vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(
-      1000,
+      2000,
     );
-    // 4 * (32000 - 1000) = 4 * 31000 = 124000
-    // default is 4_000_000
-    expect(config.getTruncateToolOutputThreshold()).toBe(124000);
+    // 4 * (8000 - 2000) = 4 * 6000 = 24000
+    // default is 25_000
+    expect(config.getTruncateToolOutputThreshold()).toBe(24000);
   });

   it('should return the default threshold when the calculated value is larger', () => {
@@ -713,8 +925,8 @@ describe('Server Config (config.ts)', () => {
       500_000,
     );
     // 4 * (2_000_000 - 500_000) = 4 * 1_500_000 = 6_000_000
-    // default is 4_000_000
-    expect(config.getTruncateToolOutputThreshold()).toBe(4_000_000);
+    // default is 25_000
+    expect(config.getTruncateToolOutputThreshold()).toBe(25_000);
   });

   it('should use a custom truncateToolOutputThreshold if provided', () => {
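The two tests above pin down the truncation rule being changed: the effective tool-output threshold is the smaller of the default (lowered from 4_000_000 to 25_000 in this release) and four times the remaining token budget. A minimal sketch of that rule, illustrative rather than the repository's actual implementation:

```typescript
// Illustrative rule behind getTruncateToolOutputThreshold():
// min(default, 4 * (model token limit - tokens used by the last prompt)).
function truncateToolOutputThreshold(
  tokenLimit: number,
  lastPromptTokenCount: number,
  defaultThreshold = 25_000,
): number {
  return Math.min(defaultThreshold, 4 * (tokenLimit - lastPromptTokenCount));
}

console.log(truncateToolOutputThreshold(8_000, 2_000)); // 24000 (calculated value wins)
console.log(truncateToolOutputThreshold(2_000_000, 500_000)); // 25000 (default wins)
```

With the old 4_000_000 default, the calculated branch almost always won; the 25_000 default makes the cap bite much earlier for large-context models.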
@@ -823,10 +1035,60 @@ describe('setApprovalMode with folder trust', () => {

    expect(wasRipGrepRegistered).toBe(true);
    expect(wasGrepRegistered).toBe(false);
-    expect(logRipgrepFallback).not.toHaveBeenCalled();
+    expect(canUseRipgrep).toHaveBeenCalledWith(true);
  });

-  it('should register GrepTool as a fallback when useRipgrep is true but it is not available', async () => {
+  it('should register RipGrepTool with system ripgrep when useBuiltinRipgrep is false', async () => {
+    (canUseRipgrep as Mock).mockResolvedValue(true);
+    const config = new Config({
+      ...baseParams,
+      useRipgrep: true,
+      useBuiltinRipgrep: false,
+    });
+    await config.initialize();
+
+    const calls = (ToolRegistry.prototype.registerTool as Mock).mock.calls;
+    const wasRipGrepRegistered = calls.some(
+      (call) => call[0] instanceof vi.mocked(RipGrepTool),
+    );
+    const wasGrepRegistered = calls.some(
+      (call) => call[0] instanceof vi.mocked(GrepTool),
+    );
+
+    expect(wasRipGrepRegistered).toBe(true);
+    expect(wasGrepRegistered).toBe(false);
+    expect(canUseRipgrep).toHaveBeenCalledWith(false);
+  });
+
+  it('should fall back to GrepTool and log error when useBuiltinRipgrep is false but system ripgrep is not available', async () => {
+    (canUseRipgrep as Mock).mockResolvedValue(false);
+    const config = new Config({
+      ...baseParams,
+      useRipgrep: true,
+      useBuiltinRipgrep: false,
+    });
+    await config.initialize();
+
+    const calls = (ToolRegistry.prototype.registerTool as Mock).mock.calls;
+    const wasRipGrepRegistered = calls.some(
+      (call) => call[0] instanceof vi.mocked(RipGrepTool),
+    );
+    const wasGrepRegistered = calls.some(
+      (call) => call[0] instanceof vi.mocked(GrepTool),
+    );
+
+    expect(wasRipGrepRegistered).toBe(false);
+    expect(wasGrepRegistered).toBe(true);
+    expect(canUseRipgrep).toHaveBeenCalledWith(false);
+    expect(logRipgrepFallback).toHaveBeenCalledWith(
+      config,
+      expect.any(RipgrepFallbackEvent),
+    );
+    const event = (logRipgrepFallback as Mock).mock.calls[0][1];
+    expect(event.error).toContain('Ripgrep is not available');
+  });
+
+  it('should fall back to GrepTool and log error when useRipgrep is true and builtin ripgrep is not available', async () => {
    (canUseRipgrep as Mock).mockResolvedValue(false);
    const config = new Config({ ...baseParams, useRipgrep: true });
    await config.initialize();
@@ -888,7 +1151,6 @@
    expect(wasRipGrepRegistered).toBe(false);
    expect(wasGrepRegistered).toBe(true);
    expect(canUseRipgrep).not.toHaveBeenCalled();
-    expect(logRipgrepFallback).not.toHaveBeenCalled();
  });
  });
 });
@@ -57,7 +57,7 @@ import { TaskTool } from '../tools/task.js';
 import { TodoWriteTool } from '../tools/todoWrite.js';
 import { ToolRegistry } from '../tools/tool-registry.js';
 import { WebFetchTool } from '../tools/web-fetch.js';
-import { WebSearchTool } from '../tools/web-search.js';
+import { WebSearchTool } from '../tools/web-search/index.js';
 import { WriteFileTool } from '../tools/write-file.js';

 // Other modules
@@ -81,6 +81,7 @@
 import { shouldAttemptBrowserLaunch } from '../utils/browser.js';
 import { FileExclusions } from '../utils/ignorePatterns.js';
 import { WorkspaceContext } from '../utils/workspaceContext.js';
+import { isToolEnabled, type ToolName } from '../utils/tool-utils.js';

 // Local config modules
 import type { FileFilteringOptions } from './constants.js';
@@ -88,8 +89,9 @@
   DEFAULT_FILE_FILTERING_OPTIONS,
   DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
 } from './constants.js';
-import { DEFAULT_QWEN_EMBEDDING_MODEL } from './models.js';
+import { DEFAULT_QWEN_EMBEDDING_MODEL, DEFAULT_QWEN_MODEL } from './models.js';
 import { Storage } from './storage.js';
+import { DEFAULT_DASHSCOPE_BASE_URL } from '../core/openaiContentGenerator/constants.js';

 // Re-export types
 export type { AnyToolInvocation, FileFilteringOptions, MCPOAuthConfig };
@@ -160,7 +162,7 @@ export interface ExtensionInstallMetadata {
   autoUpdate?: boolean;
 }

-export const DEFAULT_TRUNCATE_TOOL_OUTPUT_THRESHOLD = 4_000_000;
+export const DEFAULT_TRUNCATE_TOOL_OUTPUT_THRESHOLD = 25_000;
 export const DEFAULT_TRUNCATE_TOOL_OUTPUT_LINES = 1000;

 export class MCPServerConfig {
@@ -243,7 +245,7 @@ export interface ConfigParameters {
   fileDiscoveryService?: FileDiscoveryService;
   includeDirectories?: string[];
   bugCommand?: BugCommandSettings;
-  model: string;
+  model?: string;
   extensionContextFilePaths?: string[];
   maxSessionTurns?: number;
   sessionTokenLimit?: number;
@@ -261,11 +263,19 @@
   cliVersion?: string;
   loadMemoryFromIncludeDirectories?: boolean;
   // Web search providers
-  tavilyApiKey?: string;
+  webSearch?: {
+    provider: Array<{
+      type: 'tavily' | 'google' | 'dashscope';
+      apiKey?: string;
+      searchEngineId?: string;
+    }>;
+    default: string;
+  };
   chatCompression?: ChatCompressionSettings;
   interactive?: boolean;
   trustedFolder?: boolean;
   useRipgrep?: boolean;
+  useBuiltinRipgrep?: boolean;
   shouldUseNodePtyShell?: boolean;
   skipNextSpeakerCheck?: boolean;
   shellExecutionConfig?: ShellExecutionConfig;
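This hunk replaces the single `tavilyApiKey` string with a structured `webSearch` setting: a list of provider entries plus the name of the default provider. A hypothetical value matching that shape (the keys shown are placeholders, not real credentials):

```typescript
// Hypothetical settings value matching the new `webSearch` parameter shape.
type WebSearchSettings = {
  provider: Array<{
    type: 'tavily' | 'google' | 'dashscope';
    apiKey?: string;
    searchEngineId?: string; // only used by providers that need one
  }>;
  default: string;
};

const webSearch: WebSearchSettings = {
  provider: [
    { type: 'tavily', apiKey: 'tvly-placeholder' },
    { type: 'google', apiKey: 'google-placeholder', searchEngineId: 'cse-placeholder' },
  ],
  default: 'tavily',
};

console.log(webSearch.provider.map((p) => p.type).join(', ')); // tavily, google
```

Keeping the provider list separate from the `default` name lets several providers be configured at once while one is selected for the `web_search` tool.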
@@ -279,6 +289,7 @@
   eventEmitter?: EventEmitter;
   useSmartEdit?: boolean;
   output?: OutputSettings;
+  skipStartupContext?: boolean;
 }

 export class Config {
@@ -289,7 +300,7 @@ export class Config {
   private fileSystemService: FileSystemService;
   private contentGeneratorConfig!: ContentGeneratorConfig;
   private contentGenerator!: ContentGenerator;
-  private readonly _generationConfig: ContentGeneratorConfig;
+  private _generationConfig: Partial<ContentGeneratorConfig>;
   private readonly embeddingModel: string;
   private readonly sandbox: SandboxConfig | undefined;
   private readonly targetDir: string;
@@ -349,17 +360,26 @@
   private readonly cliVersion?: string;
   private readonly experimentalZedIntegration: boolean = false;
   private readonly loadMemoryFromIncludeDirectories: boolean = false;
-  private readonly tavilyApiKey?: string;
+  private readonly webSearch?: {
+    provider: Array<{
+      type: 'tavily' | 'google' | 'dashscope';
+      apiKey?: string;
+      searchEngineId?: string;
+    }>;
+    default: string;
+  };
   private readonly chatCompression: ChatCompressionSettings | undefined;
   private readonly interactive: boolean;
   private readonly trustedFolder: boolean | undefined;
   private readonly useRipgrep: boolean;
+  private readonly useBuiltinRipgrep: boolean;
   private readonly shouldUseNodePtyShell: boolean;
   private readonly skipNextSpeakerCheck: boolean;
   private shellExecutionConfig: ShellExecutionConfig;
   private readonly extensionManagement: boolean = true;
   private readonly enablePromptCompletion: boolean = false;
   private readonly skipLoopDetection: boolean;
+  private readonly skipStartupContext: boolean;
   private readonly vlmSwitchMode: string | undefined;
   private initialized: boolean = false;
   readonly storage: Storage;
@@ -440,8 +460,10 @@
     this._generationConfig = {
       model: params.model,
       ...(params.generationConfig || {}),
+      baseUrl: params.generationConfig?.baseUrl || DEFAULT_DASHSCOPE_BASE_URL,
     };
-    this.contentGeneratorConfig = this._generationConfig;
+    this.contentGeneratorConfig = this
+      ._generationConfig as ContentGeneratorConfig;
     this.cliVersion = params.cliVersion;

     this.loadMemoryFromIncludeDirectories =
@@ -449,13 +471,13 @@ export class Config {
     this.chatCompression = params.chatCompression;
     this.interactive = params.interactive ?? false;
     this.trustedFolder = params.trustedFolder;
-    this.shouldUseNodePtyShell = params.shouldUseNodePtyShell ?? false;
-    this.skipNextSpeakerCheck = params.skipNextSpeakerCheck ?? false;
     this.skipLoopDetection = params.skipLoopDetection ?? false;
+    this.skipStartupContext = params.skipStartupContext ?? false;

     // Web search
-    this.tavilyApiKey = params.tavilyApiKey;
+    this.webSearch = params.webSearch;
     this.useRipgrep = params.useRipgrep ?? true;
+    this.useBuiltinRipgrep = params.useBuiltinRipgrep ?? true;
     this.shouldUseNodePtyShell = params.shouldUseNodePtyShell ?? false;
     this.skipNextSpeakerCheck = params.skipNextSpeakerCheck ?? true;
     this.shellExecutionConfig = {
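The constructor's `?? true` defaults use nullish coalescing, which matters for the flags above: an explicit `false` from the caller survives `??`, whereas `|| true` would silently discard it. A tiny sketch of the difference (`resolveUseRipgrep` is an illustrative stand-in, not a real qwen-code API):

```typescript
// Mirrors `params.useRipgrep ?? true` from the constructor above.
function resolveUseRipgrep(userValue: boolean | undefined): boolean {
  // `??` only falls back when userValue is null/undefined, so an explicit
  // false is preserved; `||` would treat false as "unset" and return true.
  return userValue ?? true;
}

const whenUnset = resolveUseRipgrep(undefined); // default applies
const whenDisabled = resolveUseRipgrep(false); // explicit opt-out survives
```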
@@ -520,6 +542,26 @@ export class Config {
     return this.contentGenerator;
   }

+  /**
+   * Updates the credentials in the generation config.
+   * This is needed when credentials are set after Config construction.
+   */
+  updateCredentials(credentials: {
+    apiKey?: string;
+    baseUrl?: string;
+    model?: string;
+  }): void {
+    if (credentials.apiKey) {
+      this._generationConfig.apiKey = credentials.apiKey;
+    }
+    if (credentials.baseUrl) {
+      this._generationConfig.baseUrl = credentials.baseUrl;
+    }
+    if (credentials.model) {
+      this._generationConfig.model = credentials.model;
+    }
+  }
+
   async refreshAuth(authMethod: AuthType) {
     // Vertex and Genai have incompatible encryption and sending history with
     // throughtSignature from Genai to Vertex will fail, we need to strip them
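Note the semantics of the added `updateCredentials`: each field is applied only when truthy, so a partial update leaves the other fields untouched (and passing `undefined` or `''` is a no-op for that field). A standalone sketch of that behavior, mirroring the method body against a plain object rather than the real private `_generationConfig`:

```typescript
// Minimal stand-in for the updateCredentials logic added above.
interface Credentials {
  apiKey?: string;
  baseUrl?: string;
  model?: string;
}

const generationConfig: Credentials = { model: 'initial-model' };

function updateCredentials(credentials: Credentials): void {
  // Truthiness checks mean undefined (and empty-string) fields are skipped,
  // so callers can update credentials piecemeal.
  if (credentials.apiKey) generationConfig.apiKey = credentials.apiKey;
  if (credentials.baseUrl) generationConfig.baseUrl = credentials.baseUrl;
  if (credentials.model) generationConfig.model = credentials.model;
}

updateCredentials({ apiKey: 'sk-test' }); // partial update: only apiKey changes
```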
@@ -587,7 +629,7 @@ export class Config {
   }

   getModel(): string {
-    return this.contentGeneratorConfig.model;
+    return this.contentGeneratorConfig?.model || DEFAULT_QWEN_MODEL;
   }

   async setModel(
@@ -888,8 +930,8 @@ export class Config {
   }

   // Web search provider configuration
-  getTavilyApiKey(): string | undefined {
-    return this.tavilyApiKey;
+  getWebSearchConfig() {
+    return this.webSearch;
   }

   getIdeMode(): boolean {
@@ -965,6 +1007,10 @@ export class Config {
     return this.useRipgrep;
   }

+  getUseBuiltinRipgrep(): boolean {
+    return this.useBuiltinRipgrep;
+  }
+
   getShouldUseNodePtyShell(): boolean {
     return this.shouldUseNodePtyShell;
   }
@@ -999,6 +1045,10 @@ export class Config {
     return this.skipLoopDetection;
   }

+  getSkipStartupContext(): boolean {
+    return this.skipStartupContext;
+  }
+
   getVlmSwitchMode(): string | undefined {
     return this.vlmSwitchMode;
   }
@@ -1008,6 +1058,13 @@ export class Config {
   }

   getTruncateToolOutputThreshold(): number {
+    if (
+      !this.enableToolOutputTruncation ||
+      this.truncateToolOutputThreshold <= 0
+    ) {
+      return Number.POSITIVE_INFINITY;
+    }
+
     return Math.min(
       // Estimate remaining context window in characters (1 token ~= 4 chars).
       4 *
@@ -1018,6 +1075,10 @@ export class Config {
   }

   getTruncateToolOutputLines(): number {
+    if (!this.enableToolOutputTruncation || this.truncateToolOutputLines <= 0) {
+      return Number.POSITIVE_INFINITY;
+    }
+
     return this.truncateToolOutputLines;
   }

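Both truncation getters share the same guard: when truncation is disabled, or the configured limit is non-positive, they return `Number.POSITIVE_INFINITY` so comparisons against the limit never trigger a cut. A standalone sketch of the line-limit variant (the threshold getter additionally caps against the remaining context window, which is not reproduced here):

```typescript
// Sketch of the guard added to getTruncateToolOutputLines above.
function truncateToolOutputLines(
  enableToolOutputTruncation: boolean,
  configuredLines: number,
): number {
  if (!enableToolOutputTruncation || configuredLines <= 0) {
    // Truncation disabled or misconfigured: Infinity means "never truncate",
    // since any real line count compares less than it.
    return Number.POSITIVE_INFINITY;
  }
  return configuredLines;
}
```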
@@ -1050,37 +1111,35 @@ export class Config {
   async createToolRegistry(): Promise<ToolRegistry> {
     const registry = new ToolRegistry(this, this.eventEmitter);

-    // helper to create & register core tools that are enabled
+    const coreToolsConfig = this.getCoreTools();
+    const excludeToolsConfig = this.getExcludeTools();
+
+    // Helper to create & register core tools that are enabled
     // eslint-disable-next-line @typescript-eslint/no-explicit-any
     const registerCoreTool = (ToolClass: any, ...args: unknown[]) => {
-      const className = ToolClass.name;
-      const toolName = ToolClass.Name || className;
-      const coreTools = this.getCoreTools();
-      const excludeTools = this.getExcludeTools() || [];
-      // On some platforms, the className can be minified to _ClassName.
-      const normalizedClassName = className.replace(/^_+/, '');
+      const toolName = ToolClass?.Name as ToolName | undefined;
+      const className = ToolClass?.name ?? 'UnknownTool';

-      let isEnabled = true; // Enabled by default if coreTools is not set.
-      if (coreTools) {
-        isEnabled = coreTools.some(
-          (tool) =>
-            tool === toolName ||
-            tool === normalizedClassName ||
-            tool.startsWith(`${toolName}(`) ||
-            tool.startsWith(`${normalizedClassName}(`),
+      if (!toolName) {
+        // Log warning and skip this tool instead of crashing
+        console.warn(
+          `[Config] Skipping tool registration: ${className} is missing static Name property. ` +
+            `Tools must define a static Name property to be registered. ` +
+            `Location: config.ts:registerCoreTool`,
         );
+        return;
       }

-      const isExcluded = excludeTools.some(
-        (tool) => tool === toolName || tool === normalizedClassName,
-      );
-
-      if (isExcluded) {
-        isEnabled = false;
-      }
-
-      if (isEnabled) {
-        registry.registerTool(new ToolClass(...args));
+      if (isToolEnabled(toolName, coreToolsConfig, excludeToolsConfig)) {
+        try {
+          registry.registerTool(new ToolClass(...args));
+        } catch (error) {
+          console.error(
+            `[Config] Failed to register tool ${className} (${toolName}):`,
+            error,
+          );
+          throw error; // Re-throw after logging context
+        }
      }
    };

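The diff moves the enable/exclude decision into an `isToolEnabled` helper whose body is not shown here. Based only on the inline logic it replaces, its rules can be sketched roughly as follows; this is a hypothetical reconstruction, and the real helper may differ (for example, the removed code also matched minified `_ClassName` variants, omitted here):

```typescript
// Hypothetical reconstruction of the enable/exclude check, based on the
// inline logic removed in the hunk above; not the real isToolEnabled.
function isToolEnabled(
  toolName: string,
  coreTools: string[] | undefined,
  excludeTools: string[] | undefined,
): boolean {
  // Enabled by default when coreTools is not configured; otherwise the tool
  // must match exactly or as a "name(args)"-style invocation pattern.
  const enabled = !coreTools
    ? true
    : coreTools.some(
        (tool) => tool === toolName || tool.startsWith(`${toolName}(`),
      );
  // Any exact match in excludeTools overrides coreTools.
  const excluded = (excludeTools ?? []).some((tool) => tool === toolName);
  return enabled && !excluded;
}
```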
@@ -1092,13 +1151,18 @@ export class Config {
     let useRipgrep = false;
     let errorString: undefined | string = undefined;
     try {
-      useRipgrep = await canUseRipgrep();
+      useRipgrep = await canUseRipgrep(this.getUseBuiltinRipgrep());
     } catch (error: unknown) {
       errorString = String(error);
     }
     if (useRipgrep) {
       registerCoreTool(RipGrepTool, this);
     } else {
+      errorString =
+        errorString ||
+        'Ripgrep is not available. Please install ripgrep globally.';
+
+      // Log for telemetry
       logRipgrepFallback(this, new RipgrepFallbackEvent(errorString));
       registerCoreTool(GrepTool, this);
     }
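The shape of this hunk is a capability probe with a recorded fallback reason: try the preferred tool, capture why it failed, and register the fallback with that reason logged. A synchronous sketch under stated assumptions (`probe` and `onFallback` are stand-ins for `canUseRipgrep` and `logRipgrepFallback`, which in the real code are async and telemetry-backed):

```typescript
// Generic shape of the ripgrep fallback above, made synchronous for
// illustration; probe/onFallback are hypothetical stand-ins.
function pickSearchTool(
  probe: () => boolean,
  onFallback: (reason: string) => void,
): 'ripgrep' | 'grep' {
  let errorString: string | undefined;
  let available = false;
  try {
    available = probe();
  } catch (error: unknown) {
    errorString = String(error); // keep the failure reason for logging
  }
  if (available) {
    return 'ripgrep';
  }
  // Default message covers the "probe returned false without throwing" case.
  onFallback(
    errorString ||
      'Ripgrep is not available. Please install ripgrep globally.',
  );
  return 'grep';
}

let fallbackReason = '';
const chosen = pickSearchTool(
  () => {
    throw new Error('rg binary missing');
  },
  (reason) => {
    fallbackReason = reason;
  },
);
```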
@@ -1119,8 +1183,10 @@ export class Config {
     registerCoreTool(TodoWriteTool, this);
     registerCoreTool(ExitPlanModeTool, this);
     registerCoreTool(WebFetchTool, this);
-    // Conditionally register web search tool only if Tavily API key is set
-    if (this.getTavilyApiKey()) {
+    // Conditionally register web search tool if web search provider is configured
+    // buildWebSearchConfig ensures qwen-oauth users get dashscope provider, so
+    // if tool is registered, config must exist
+    if (this.getWebSearchConfig()) {
       registerCoreTool(WebSearchTool, this);
     }

@@ -69,7 +69,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -288,7 +288,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -517,7 +517,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -731,7 +731,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -945,7 +945,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1159,7 +1159,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1373,7 +1373,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -1587,7 +1587,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -1801,7 +1801,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -2015,7 +2015,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -2252,7 +2252,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -2549,7 +2549,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -2786,7 +2786,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -3079,7 +3079,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
|||||||
## Software Engineering Tasks
|
## Software Engineering Tasks
|
||||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||||
@@ -3293,7 +3293,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
 ## Software Engineering Tasks
 When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
 - **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
-- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
+- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
 - **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
 - **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
 - **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.

@@ -21,6 +21,9 @@ vi.mock('../../telemetry/loggers.js', () => ({
 }));
 
 vi.mock('../../utils/openaiLogger.js', () => ({
+  OpenAILogger: vi.fn().mockImplementation(() => ({
+    logInteraction: vi.fn(),
+  })),
   openaiLogger: {
     logInteraction: vi.fn(),
   },

@@ -16,11 +16,11 @@ import {
 
 import type { Content, GenerateContentResponse, Part } from '@google/genai';
 import {
-  findCompressSplitPoint,
   isThinkingDefault,
   isThinkingSupported,
   GeminiClient,
 } from './client.js';
+import { findCompressSplitPoint } from '../services/chatCompressionService.js';
 import {
   AuthType,
   type ContentGenerator,
@@ -42,7 +42,6 @@ import { setSimulate429 } from '../utils/testUtils.js';
 import { tokenLimit } from './tokenLimits.js';
 import { ideContextStore } from '../ide/ideContext.js';
 import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
-import { QwenLogger } from '../telemetry/index.js';
 
 // Mock fs module to prevent actual file system operations during tests
 const mockFileSystem = new Map<string, string>();
@@ -101,6 +100,22 @@ vi.mock('../utils/errorReporting', () => ({ reportError: vi.fn() }));
 vi.mock('../utils/nextSpeakerChecker', () => ({
   checkNextSpeaker: vi.fn().mockResolvedValue(null),
 }));
+vi.mock('../utils/environmentContext', () => ({
+  getEnvironmentContext: vi
+    .fn()
+    .mockResolvedValue([{ text: 'Mocked env context' }]),
+  getInitialChatHistory: vi.fn(async (_config, extraHistory) => [
+    {
+      role: 'user',
+      parts: [{ text: 'Mocked env context' }],
+    },
+    {
+      role: 'model',
+      parts: [{ text: 'Got it. Thanks for the context!' }],
+    },
+    ...(extraHistory ?? []),
+  ]),
+}));
 vi.mock('../utils/generateContentResponseUtilities', () => ({
   getResponseText: (result: GenerateContentResponse) =>
     result.candidates?.[0]?.content?.parts?.map((part) => part.text).join('') ||
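The mock above pins down the contract the tests expect from the new `getInitialChatHistory` helper: a two-turn seed (environment context as a user turn, a fixed model acknowledgement) followed by any caller-supplied history. A minimal sketch of that shape is below; the real helper takes a `Config` and derives the environment text itself, so the simplified `envText` signature here is an assumption for illustration only.

```typescript
// Sketch only: the return shape implied by the mock above, not the real
// implementation in utils/environmentContext.ts.
type Part = { text: string };
type Content = { role: 'user' | 'model'; parts: Part[] };

async function getInitialChatHistory(
  envText: string, // assumption: the real function takes a Config instead
  extraHistory?: Content[],
): Promise<Content[]> {
  return [
    // Environment context is injected as the first user turn...
    { role: 'user', parts: [{ text: envText }] },
    // ...followed by a fixed model acknowledgement, then any extra history.
    { role: 'model', parts: [{ text: 'Got it. Thanks for the context!' }] },
    ...(extraHistory ?? []),
  ];
}

// Usage: two seed turns plus one caller-supplied turn.
getInitialChatHistory('Mocked env context', [
  { role: 'user', parts: [{ text: 'hello' }] },
]).then((history) => console.log(history.length)); // → 3
```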
@@ -136,6 +151,10 @@ vi.mock('../ide/ideContext.js');
 vi.mock('../telemetry/uiTelemetry.js', () => ({
   uiTelemetryService: mockUiTelemetryService,
 }));
+vi.mock('../telemetry/loggers.js', () => ({
+  logChatCompression: vi.fn(),
+  logNextSpeakerCheck: vi.fn(),
+}));
 
 /**
  * Array.fromAsync ponyfill, which will be available in es 2024.
@@ -619,7 +638,8 @@ describe('Gemini Client (client.ts)', () => {
   });
 
   it('logs a telemetry event when compressing', async () => {
-    vi.spyOn(QwenLogger.prototype, 'logChatCompressionEvent');
+    const { logChatCompression } = await import('../telemetry/loggers.js');
+    vi.mocked(logChatCompression).mockClear();
 
     const MOCKED_TOKEN_LIMIT = 1000;
     const MOCKED_CONTEXT_PERCENTAGE_THRESHOLD = 0.5;
@@ -627,19 +647,37 @@ describe('Gemini Client (client.ts)', () => {
     vi.spyOn(client['config'], 'getChatCompression').mockReturnValue({
       contextPercentageThreshold: MOCKED_CONTEXT_PERCENTAGE_THRESHOLD,
     });
-    const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
+    // Need multiple history items so there's something to compress
+    const history = [
+      { role: 'user', parts: [{ text: '...history 1...' }] },
+      { role: 'model', parts: [{ text: '...history 2...' }] },
+      { role: 'user', parts: [{ text: '...history 3...' }] },
+      { role: 'model', parts: [{ text: '...history 4...' }] },
+    ];
     mockGetHistory.mockReturnValue(history);
 
+    // Token count needs to be ABOVE the threshold to trigger compression
     const originalTokenCount =
-      MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD;
+      MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD + 1;
 
     vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(
       originalTokenCount,
     );
 
-    // We need to control the estimated new token count.
-    // We mock startChat to return a chat with a known history.
+    // Mock the summary response from the chat
     const summaryText = 'This is a summary.';
+    mockGenerateContentFn.mockResolvedValue({
+      candidates: [
+        {
+          content: {
+            role: 'model',
+            parts: [{ text: summaryText }],
+          },
+        },
+      ],
+    } as unknown as GenerateContentResponse);
 
+    // Mock startChat to complete the compression flow
     const splitPoint = findCompressSplitPoint(history, 0.7);
     const historyToKeep = history.slice(splitPoint);
     const newCompressedHistory: Content[] = [
@@ -659,52 +697,36 @@ describe('Gemini Client (client.ts)', () => {
       .fn()
       .mockResolvedValue(mockNewChat as GeminiChat);
 
-    const totalChars = newCompressedHistory.reduce(
-      (total, content) => total + JSON.stringify(content).length,
-      0,
-    );
-    const newTokenCount = Math.floor(totalChars / 4);
-
-    // Mock the summary response from the chat
-    mockGenerateContentFn.mockResolvedValue({
-      candidates: [
-        {
-          content: {
-            role: 'model',
-            parts: [{ text: summaryText }],
-          },
-        },
-      ],
-    } as unknown as GenerateContentResponse);
-
     await client.tryCompressChat('prompt-id-3', false);
 
-    expect(QwenLogger.prototype.logChatCompressionEvent).toHaveBeenCalledWith(
+    expect(logChatCompression).toHaveBeenCalledWith(
+      expect.anything(),
       expect.objectContaining({
         tokens_before: originalTokenCount,
-        tokens_after: newTokenCount,
       }),
     );
-    expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalledWith(
-      newTokenCount,
-    );
-    expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalledTimes(
-      1,
-    );
+    expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalled();
   });
 
-  it('should trigger summarization if token count is at threshold with contextPercentageThreshold setting', async () => {
+  it('should trigger summarization if token count is above threshold with contextPercentageThreshold setting', async () => {
     const MOCKED_TOKEN_LIMIT = 1000;
     const MOCKED_CONTEXT_PERCENTAGE_THRESHOLD = 0.5;
     vi.mocked(tokenLimit).mockReturnValue(MOCKED_TOKEN_LIMIT);
     vi.spyOn(client['config'], 'getChatCompression').mockReturnValue({
       contextPercentageThreshold: MOCKED_CONTEXT_PERCENTAGE_THRESHOLD,
     });
-    const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
+    // Need multiple history items so there's something to compress
+    const history = [
+      { role: 'user', parts: [{ text: '...history 1...' }] },
+      { role: 'model', parts: [{ text: '...history 2...' }] },
+      { role: 'user', parts: [{ text: '...history 3...' }] },
+      { role: 'model', parts: [{ text: '...history 4...' }] },
+    ];
     mockGetHistory.mockReturnValue(history);
 
+    // Token count needs to be ABOVE the threshold to trigger compression
     const originalTokenCount =
-      MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD;
+      MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD + 1;
 
     vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(
       originalTokenCount,
@@ -864,7 +886,13 @@ describe('Gemini Client (client.ts)', () => {
   });
 
   it('should always trigger summarization when force is true, regardless of token count', async () => {
-    const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
+    // Need multiple history items so there's something to compress
+    const history = [
+      { role: 'user', parts: [{ text: '...history 1...' }] },
+      { role: 'model', parts: [{ text: '...history 2...' }] },
+      { role: 'user', parts: [{ text: '...history 3...' }] },
+      { role: 'model', parts: [{ text: '...history 4...' }] },
+    ];
     mockGetHistory.mockReturnValue(history);
 
     const originalTokenCount = 100; // Well below threshold, but > estimated new count

@@ -25,13 +25,11 @@
 import type { ContentGenerator } from './contentGenerator.js';
 import { GeminiChat } from './geminiChat.js';
 import {
-  getCompressionPrompt,
   getCoreSystemPrompt,
   getCustomSystemPrompt,
   getPlanModeSystemReminder,
   getSubagentSystemReminder,
 } from './prompts.js';
-import { tokenLimit } from './tokenLimits.js';
 import {
   CompressionStatus,
   GeminiEventType,
@@ -42,6 +40,11 @@
 
 // Services
 import { type ChatRecordingService } from '../services/chatRecordingService.js';
+import {
+  ChatCompressionService,
+  COMPRESSION_PRESERVE_THRESHOLD,
+  COMPRESSION_TOKEN_THRESHOLD,
+} from '../services/chatCompressionService.js';
 import { LoopDetectionService } from '../services/loopDetectionService.js';
 
 // Tools
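The new imports move compression behind a `ChatCompressionService`. The diff never shows the service's declarations, only its call site in `tryCompressChat` (`const { newHistory, info } = await compressionService.compress(...)`), so the shapes below are an inferred sketch, not the real types from `chatCompressionService.ts`; in the real code `CompressionStatus` is the enum imported above, modeled here as a string union.

```typescript
// Assumed shapes only, inferred from the tryCompressChat call site in this
// diff; the real declarations live in chatCompressionService.ts.
type Content = { role: string; parts: Array<{ text?: string }> };

// Modeled as a union here; the real code uses the CompressionStatus enum.
type CompressionStatus =
  | 'COMPRESSED'
  | 'NOOP'
  | 'COMPRESSION_FAILED_INFLATED_TOKEN_COUNT'
  | 'COMPRESSION_FAILED_EMPTY_SUMMARY';

interface ChatCompressionInfo {
  originalTokenCount: number;
  newTokenCount: number;
  compressionStatus: CompressionStatus;
}

interface CompressResult {
  // Present on success; the client restarts the chat with this history.
  newHistory: Content[] | null;
  info: ChatCompressionInfo;
}

const example: CompressResult = {
  newHistory: [{ role: 'user', parts: [{ text: 'summary' }] }],
  info: {
    originalTokenCount: 900,
    newTokenCount: 120,
    compressionStatus: 'COMPRESSED',
  },
};
console.log(example.info.compressionStatus); // → COMPRESSED
```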
@@ -50,21 +53,18 @@ import { TaskTool } from '../tools/task.js';
 // Telemetry
 import {
   NextSpeakerCheckEvent,
-  logChatCompression,
   logNextSpeakerCheck,
-  makeChatCompressionEvent,
-  uiTelemetryService,
 } from '../telemetry/index.js';
 
 // Utilities
 import {
   getDirectoryContextString,
-  getEnvironmentContext,
+  getInitialChatHistory,
 } from '../utils/environmentContext.js';
 import { reportError } from '../utils/errorReporting.js';
 import { getErrorMessage } from '../utils/errors.js';
 import { checkNextSpeaker } from '../utils/nextSpeakerChecker.js';
-import { flatMapTextParts, getResponseText } from '../utils/partUtils.js';
+import { flatMapTextParts } from '../utils/partUtils.js';
 import { retryWithBackoff } from '../utils/retry.js';
 
 // IDE integration
@@ -85,68 +85,8 @@ export function isThinkingDefault(model: string) {
   return model.startsWith('gemini-2.5') || model === DEFAULT_GEMINI_MODEL_AUTO;
 }
 
-/**
- * Returns the index of the oldest item to keep when compressing. May return
- * contents.length which indicates that everything should be compressed.
- *
- * Exported for testing purposes.
- */
-export function findCompressSplitPoint(
-  contents: Content[],
-  fraction: number,
-): number {
-  if (fraction <= 0 || fraction >= 1) {
-    throw new Error('Fraction must be between 0 and 1');
-  }
-
-  const charCounts = contents.map((content) => JSON.stringify(content).length);
-  const totalCharCount = charCounts.reduce((a, b) => a + b, 0);
-  const targetCharCount = totalCharCount * fraction;
-
-  let lastSplitPoint = 0; // 0 is always valid (compress nothing)
-  let cumulativeCharCount = 0;
-  for (let i = 0; i < contents.length; i++) {
-    const content = contents[i];
-    if (
-      content.role === 'user' &&
-      !content.parts?.some((part) => !!part.functionResponse)
-    ) {
-      if (cumulativeCharCount >= targetCharCount) {
-        return i;
-      }
-      lastSplitPoint = i;
-    }
-    cumulativeCharCount += charCounts[i];
-  }
-
-  // We found no split points after targetCharCount.
-  // Check if it's safe to compress everything.
-  const lastContent = contents[contents.length - 1];
-  if (
-    lastContent?.role === 'model' &&
-    !lastContent?.parts?.some((part) => part.functionCall)
-  ) {
-    return contents.length;
-  }
-
-  // Can't compress everything so just compress at last splitpoint.
-  return lastSplitPoint;
-}
-
 const MAX_TURNS = 100;
 
-/**
- * Threshold for compression token count as a fraction of the model's token limit.
- * If the chat history exceeds this threshold, it will be compressed.
- */
-const COMPRESSION_TOKEN_THRESHOLD = 0.7;
-
-/**
- * The fraction of the latest chat history to keep. A value of 0.3
- * means that only the last 30% of the chat history will be kept after compression.
- */
-const COMPRESSION_PRESERVE_THRESHOLD = 0.3;
-
 export class GeminiClient {
   private chat?: GeminiChat;
   private readonly generateContentConfig: GenerateContentConfig = {
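The split-point helper being moved out of `client.ts` (it is now exported from `chatCompressionService.js`, per the test import change earlier in this diff) can be exercised standalone. The function body below is copied verbatim from the deleted lines; only the minimal `Part`/`Content` types are local stand-ins for the `@google/genai` types, so the block is self-contained.

```typescript
// Local stand-ins for @google/genai's Part/Content types.
type Part = { text?: string; functionCall?: unknown; functionResponse?: unknown };
type Content = { role: string; parts?: Part[] };

// Copied from the deleted client.ts lines above: returns the index of the
// oldest item to keep; contents.length means "compress everything".
function findCompressSplitPoint(contents: Content[], fraction: number): number {
  if (fraction <= 0 || fraction >= 1) {
    throw new Error('Fraction must be between 0 and 1');
  }

  const charCounts = contents.map((content) => JSON.stringify(content).length);
  const totalCharCount = charCounts.reduce((a, b) => a + b, 0);
  const targetCharCount = totalCharCount * fraction;

  let lastSplitPoint = 0; // 0 is always valid (compress nothing)
  let cumulativeCharCount = 0;
  for (let i = 0; i < contents.length; i++) {
    const content = contents[i];
    if (
      content.role === 'user' &&
      !content.parts?.some((part) => !!part.functionResponse)
    ) {
      if (cumulativeCharCount >= targetCharCount) {
        return i;
      }
      lastSplitPoint = i;
    }
    cumulativeCharCount += charCounts[i];
  }

  // No user split point after the target: compress everything if the history
  // ends on a plain model turn, otherwise fall back to the last split point.
  const lastContent = contents[contents.length - 1];
  if (
    lastContent?.role === 'model' &&
    !lastContent?.parts?.some((part) => part.functionCall)
  ) {
    return contents.length;
  }
  return lastSplitPoint;
}

// Usage: four roughly equal-sized turns.
const history: Content[] = [
  { role: 'user', parts: [{ text: 'aaaa' }] },
  { role: 'model', parts: [{ text: 'bbbb' }] },
  { role: 'user', parts: [{ text: 'cccc' }] },
  { role: 'model', parts: [{ text: 'dddd' }] },
];
console.log(findCompressSplitPoint(history, 0.3)); // → 2 (keep the last two turns)
console.log(findCompressSplitPoint(history, 0.7)); // → 4 (safe to compress everything)
```

Note the second call: with no user turn past the 70% mark and a trailing model turn that makes no function call, the function returns `contents.length`, i.e. the entire history is eligible for compression.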
@@ -243,23 +183,13 @@
   async startChat(extraHistory?: Content[]): Promise<GeminiChat> {
     this.forceFullIdeContext = true;
     this.hasFailedCompressionAttempt = false;
-    const envParts = await getEnvironmentContext(this.config);
-
     const toolRegistry = this.config.getToolRegistry();
     const toolDeclarations = toolRegistry.getFunctionDeclarations();
     const tools: Tool[] = [{ functionDeclarations: toolDeclarations }];
 
-    const history: Content[] = [
-      {
-        role: 'user',
-        parts: envParts,
-      },
-      {
-        role: 'model',
-        parts: [{ text: 'Got it. Thanks for the context!' }],
-      },
-      ...(extraHistory ?? []),
-    ];
+    const history = await getInitialChatHistory(this.config, extraHistory);
     try {
       const userMemory = this.config.getUserMemory();
       const model = this.config.getModel();
@@ -503,14 +433,15 @@
       userMemory,
       this.config.getModel(),
     );
-    const environment = await getEnvironmentContext(this.config);
+    const initialHistory = await getInitialChatHistory(this.config);
 
     // Create a mock request content to count total tokens
     const mockRequestContent = [
       {
         role: 'system' as const,
-        parts: [{ text: systemPrompt }, ...environment],
+        parts: [{ text: systemPrompt }],
       },
+      ...initialHistory,
       ...currentHistory,
     ];
 
@@ -732,127 +663,37 @@
     prompt_id: string,
     force: boolean = false,
   ): Promise<ChatCompressionInfo> {
-    const model = this.config.getModel();
+    const compressionService = new ChatCompressionService();
 
-    const curatedHistory = this.getChat().getHistory(true);
+    const { newHistory, info } = await compressionService.compress(
+      this.getChat(),
+      prompt_id,
+      force,
+      this.config.getModel(),
+      this.config,
+      this.hasFailedCompressionAttempt,
+    );
 
-    // Regardless of `force`, don't do anything if the history is empty.
-    if (
-      curatedHistory.length === 0 ||
-      (this.hasFailedCompressionAttempt && !force)
+    // Handle compression result
+    if (info.compressionStatus === CompressionStatus.COMPRESSED) {
+      // Success: update chat with new compressed history
+      if (newHistory) {
+        this.chat = await this.startChat(newHistory);
+        this.forceFullIdeContext = true;
+      }
+    } else if (
+      info.compressionStatus ===
+        CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT ||
+      info.compressionStatus ===
+        CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY
     ) {
-      return {
-        originalTokenCount: 0,
-        newTokenCount: 0,
-        compressionStatus: CompressionStatus.NOOP,
-      };
-    }
-
-    const originalTokenCount = uiTelemetryService.getLastPromptTokenCount();
-
-    const contextPercentageThreshold =
-      this.config.getChatCompression()?.contextPercentageThreshold;
-
-    // Don't compress if not forced and we are under the limit.
-    if (!force) {
-      const threshold =
-        contextPercentageThreshold ?? COMPRESSION_TOKEN_THRESHOLD;
-      if (originalTokenCount < threshold * tokenLimit(model)) {
-        return {
-          originalTokenCount,
-          newTokenCount: originalTokenCount,
-          compressionStatus: CompressionStatus.NOOP,
-        };
+      // Track failed attempts (only mark as failed if not forced)
+      if (!force) {
+        this.hasFailedCompressionAttempt = true;
       }
     }
 
-    const splitPoint = findCompressSplitPoint(
-      curatedHistory,
-      1 - COMPRESSION_PRESERVE_THRESHOLD,
-    );
-
-    const historyToCompress = curatedHistory.slice(0, splitPoint);
-    const historyToKeep = curatedHistory.slice(splitPoint);
-
-    const summaryResponse = await this.config
-      .getContentGenerator()
-      .generateContent(
-        {
-          model,
-          contents: [
-            ...historyToCompress,
-            {
-              role: 'user',
-              parts: [
-                {
-                  text: 'First, reason in your scratchpad. Then, generate the <state_snapshot>.',
-                },
-              ],
-            },
-          ],
-          config: {
-            systemInstruction: { text: getCompressionPrompt() },
-          },
-        },
-        prompt_id,
-      );
-    const summary = getResponseText(summaryResponse) ?? '';
-
-    const chat = await this.startChat([
-      {
-        role: 'user',
-        parts: [{ text: summary }],
-      },
-      {
-        role: 'model',
-        parts: [{ text: 'Got it. Thanks for the additional context!' }],
-      },
-      ...historyToKeep,
-    ]);
-    this.forceFullIdeContext = true;
-
-    // Estimate token count 1 token ≈ 4 characters
-    const newTokenCount = Math.floor(
-      chat
-        .getHistory()
-        .reduce((total, content) => total + JSON.stringify(content).length, 0) /
-        4,
-    );
-
-    logChatCompression(
-      this.config,
-      makeChatCompressionEvent({
-        tokens_before: originalTokenCount,
-        tokens_after: newTokenCount,
-      }),
-    );
-
-    if (newTokenCount > originalTokenCount) {
-      this.hasFailedCompressionAttempt = !force && true;
-      return {
-        originalTokenCount,
-        newTokenCount,
-        compressionStatus:
-          CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
-      };
-    } else {
-      this.chat = chat; // Chat compression successful, set new state.
-      uiTelemetryService.setLastPromptTokenCount(newTokenCount);
-    }
-
-    logChatCompression(
-      this.config,
-      makeChatCompressionEvent({
-        tokens_before: originalTokenCount,
-        tokens_after: newTokenCount,
-      }),
-    );
-
-    return {
-      originalTokenCount,
-      newTokenCount,
-      compressionStatus: CompressionStatus.COMPRESSED,
-    };
+    return info;
   }
 }

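Two pieces of arithmetic govern the inline logic removed above: the "should we compress?" check (`originalTokenCount < threshold * tokenLimit(model)` means no-op, with `COMPRESSION_TOKEN_THRESHOLD = 0.7` as the default) and the post-compression estimate of 1 token ≈ 4 characters of serialized history. The sketch below isolates both, using the constants and formulas from the deleted code; whether `ChatCompressionService` keeps exactly these semantics is not shown in this diff.

```typescript
// Constants and formulas taken from the inline logic deleted above;
// this is an illustrative sketch, not the ChatCompressionService source.
const COMPRESSION_TOKEN_THRESHOLD = 0.7;

// Inverse of the removed "originalTokenCount < threshold * tokenLimit" no-op check.
function shouldCompress(
  originalTokenCount: number,
  tokenLimit: number,
  threshold: number = COMPRESSION_TOKEN_THRESHOLD,
): boolean {
  return originalTokenCount >= threshold * tokenLimit;
}

// "Estimate token count 1 token ≈ 4 characters" from the deleted code.
function estimateTokens(history: object[]): number {
  const totalChars = history.reduce(
    (total, content) => total + JSON.stringify(content).length,
    0,
  );
  return Math.floor(totalChars / 4);
}

console.log(shouldCompress(701, 1000)); // → true
console.log(shouldCompress(699, 1000)); // → false
console.log(estimateTokens([{ a: 'xxxx' }])); // '{"a":"xxxx"}' is 12 chars → 3
```

This estimate is also why compression can "fail inflated": if the summarized history serializes to more estimated tokens than the original count, the old code reported `COMPRESSION_FAILED_INFLATED_TOKEN_COUNT` instead of adopting the new chat.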
@@ -4,13 +4,9 @@
  * SPDX-License-Identifier: Apache-2.0
  */
 
-import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
+import { describe, it, expect, vi } from 'vitest';
 import type { ContentGenerator } from './contentGenerator.js';
-import {
-  createContentGenerator,
-  AuthType,
-  createContentGeneratorConfig,
-} from './contentGenerator.js';
+import { createContentGenerator, AuthType } from './contentGenerator.js';
 import { createCodeAssistContentGenerator } from '../code_assist/codeAssist.js';
 import { GoogleGenAI } from '@google/genai';
 import type { Config } from '../config/config.js';
@@ -110,83 +106,3 @@ describe('createContentGenerator', () => {
     );
   });
 });
-
-describe('createContentGeneratorConfig', () => {
-  const mockConfig = {
-    getModel: vi.fn().mockReturnValue('gemini-pro'),
-    setModel: vi.fn(),
-    flashFallbackHandler: vi.fn(),
-    getProxy: vi.fn(),
-    getEnableOpenAILogging: vi.fn().mockReturnValue(false),
-    getSamplingParams: vi.fn().mockReturnValue(undefined),
-    getContentGeneratorTimeout: vi.fn().mockReturnValue(undefined),
-    getContentGeneratorMaxRetries: vi.fn().mockReturnValue(undefined),
-    getContentGeneratorDisableCacheControl: vi.fn().mockReturnValue(undefined),
-    getContentGeneratorSamplingParams: vi.fn().mockReturnValue(undefined),
-    getCliVersion: vi.fn().mockReturnValue('1.0.0'),
-  } as unknown as Config;
-
-  beforeEach(() => {
-    // Reset modules to re-evaluate imports and environment variables
-    vi.resetModules();
-    vi.clearAllMocks();
-  });
-
-  afterEach(() => {
-    vi.unstubAllEnvs();
-  });
-
-  it('should configure for Gemini using GEMINI_API_KEY when set', async () => {
-    vi.stubEnv('GEMINI_API_KEY', 'env-gemini-key');
-    const config = await createContentGeneratorConfig(
-      mockConfig,
-      AuthType.USE_GEMINI,
-    );
-    expect(config.apiKey).toBe('env-gemini-key');
-    expect(config.vertexai).toBe(false);
-  });
-
-  it('should not configure for Gemini if GEMINI_API_KEY is empty', async () => {
-    vi.stubEnv('GEMINI_API_KEY', '');
-    const config = await createContentGeneratorConfig(
-      mockConfig,
-      AuthType.USE_GEMINI,
-    );
-    expect(config.apiKey).toBeUndefined();
-    expect(config.vertexai).toBeUndefined();
-  });
-
-  it('should configure for Vertex AI using GOOGLE_API_KEY when set', async () => {
-    vi.stubEnv('GOOGLE_API_KEY', 'env-google-key');
-    const config = await createContentGeneratorConfig(
-      mockConfig,
-      AuthType.USE_VERTEX_AI,
-    );
-    expect(config.apiKey).toBe('env-google-key');
-    expect(config.vertexai).toBe(true);
-  });
-
-  it('should configure for Vertex AI using GCP project and location when set', async () => {
-    vi.stubEnv('GOOGLE_API_KEY', undefined);
-    vi.stubEnv('GOOGLE_CLOUD_PROJECT', 'env-gcp-project');
-    vi.stubEnv('GOOGLE_CLOUD_LOCATION', 'env-gcp-location');
-    const config = await createContentGeneratorConfig(
-      mockConfig,
-      AuthType.USE_VERTEX_AI,
-    );
-    expect(config.vertexai).toBe(true);
-    expect(config.apiKey).toBeUndefined();
-  });
-
-  it('should not configure for Vertex AI if required env vars are empty', async () => {
-    vi.stubEnv('GOOGLE_API_KEY', '');
-    vi.stubEnv('GOOGLE_CLOUD_PROJECT', '');
-    vi.stubEnv('GOOGLE_CLOUD_LOCATION', '');
-    const config = await createContentGeneratorConfig(
-      mockConfig,
-      AuthType.USE_VERTEX_AI,
-    );
-    expect(config.apiKey).toBeUndefined();
-    expect(config.vertexai).toBeUndefined();
-  });
-});

@@ -14,8 +14,8 @@ import type {
 } from '@google/genai';
 import { GoogleGenAI } from '@google/genai';
 import { createCodeAssistContentGenerator } from '../code_assist/codeAssist.js';
-import type { Config } from '../config/config.js';
 import { DEFAULT_QWEN_MODEL } from '../config/models.js';
+import type { Config } from '../config/config.js';

 import type { UserTierId } from '../code_assist/types.js';
 import { InstallationManager } from '../utils/installationManager.js';
@@ -58,6 +58,7 @@ export type ContentGeneratorConfig = {
   vertexai?: boolean;
   authType?: AuthType | undefined;
   enableOpenAILogging?: boolean;
+  openAILoggingDir?: string;
   // Timeout configuration in milliseconds
   timeout?: number;
   // Maximum retries for failed requests
@@ -82,53 +83,37 @@ export function createContentGeneratorConfig(
   authType: AuthType | undefined,
   generationConfig?: Partial<ContentGeneratorConfig>,
 ): ContentGeneratorConfig {
-  const geminiApiKey = process.env['GEMINI_API_KEY'] || undefined;
-  const googleApiKey = process.env['GOOGLE_API_KEY'] || undefined;
-  const googleCloudProject = process.env['GOOGLE_CLOUD_PROJECT'] || undefined;
-  const googleCloudLocation = process.env['GOOGLE_CLOUD_LOCATION'] || undefined;
-
-  const newContentGeneratorConfig: ContentGeneratorConfig = {
+  const newContentGeneratorConfig: Partial<ContentGeneratorConfig> = {
     ...(generationConfig || {}),
-    model: generationConfig?.model || DEFAULT_QWEN_MODEL,
     authType,
     proxy: config?.getProxy(),
   };
-
-  // If we are using Google auth or we are in Cloud Shell, there is nothing else to validate for now
-  if (
-    authType === AuthType.LOGIN_WITH_GOOGLE ||
-    authType === AuthType.CLOUD_SHELL
-  ) {
-    return newContentGeneratorConfig;
-  }
-
-  if (authType === AuthType.USE_GEMINI && geminiApiKey) {
-    newContentGeneratorConfig.apiKey = geminiApiKey;
-    newContentGeneratorConfig.vertexai = false;
-
-    return newContentGeneratorConfig;
-  }
-
-  if (
-    authType === AuthType.USE_VERTEX_AI &&
-    (googleApiKey || (googleCloudProject && googleCloudLocation))
-  ) {
-    newContentGeneratorConfig.apiKey = googleApiKey;
-    newContentGeneratorConfig.vertexai = true;
-
-    return newContentGeneratorConfig;
-  }
-
   if (authType === AuthType.QWEN_OAUTH) {
     // For Qwen OAuth, we'll handle the API key dynamically in createContentGenerator
     // Set a special marker to indicate this is Qwen OAuth
-    newContentGeneratorConfig.apiKey = 'QWEN_OAUTH_DYNAMIC_TOKEN';
-    newContentGeneratorConfig.model = DEFAULT_QWEN_MODEL;
-
-    return newContentGeneratorConfig;
+    return {
+      ...newContentGeneratorConfig,
+      model: DEFAULT_QWEN_MODEL,
+      apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
+    } as ContentGeneratorConfig;
   }

-  return newContentGeneratorConfig;
+  if (authType === AuthType.USE_OPENAI) {
+    if (!newContentGeneratorConfig.apiKey) {
+      throw new Error('OpenAI API key is required');
+    }
+
+    return {
+      ...newContentGeneratorConfig,
+      model: newContentGeneratorConfig?.model || 'qwen3-coder-plus',
+    } as ContentGeneratorConfig;
+  }
+
+  return {
+    ...newContentGeneratorConfig,
+    model: newContentGeneratorConfig?.model || DEFAULT_QWEN_MODEL,
+  } as ContentGeneratorConfig;
 }

 export async function createContentGenerator(

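A note on the hunk above: the rewritten `createContentGeneratorConfig` drops the Gemini/Vertex env-var probing and instead branches on the auth type, returning a config whose `model` falls back to a default only when the caller did not supply one. The control flow can be sketched as follows; this is a minimal standalone sketch, not the repository's actual module, and names such as `SketchConfig`, `createConfigSketch`, and the `'qwen-default'` value are simplified stand-ins:

```typescript
// Sketch of the post-change createContentGeneratorConfig control flow.
// All names and default values here are illustrative stand-ins.

type SketchConfig = {
  model?: string;
  apiKey?: string;
  authType?: string;
};

const DEFAULT_MODEL = 'qwen-default'; // stand-in for DEFAULT_QWEN_MODEL

function createConfigSketch(
  authType: string,
  generationConfig: Partial<SketchConfig> = {},
): SketchConfig {
  // Start from caller-supplied overrides instead of probing env vars.
  const base: Partial<SketchConfig> = { ...generationConfig, authType };

  if (authType === 'qwen-oauth') {
    // The real API key is resolved dynamically later; store a marker for now.
    return {
      ...base,
      model: DEFAULT_MODEL,
      apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
    };
  }

  if (authType === 'openai') {
    // OpenAI-compatible auth must already carry a key in the config.
    if (!base.apiKey) {
      throw new Error('OpenAI API key is required');
    }
    return { ...base, model: base.model || 'qwen3-coder-plus' };
  }

  // Every other auth type falls back to the default model.
  return { ...base, model: base.model || DEFAULT_MODEL };
}
```

An explicitly supplied `model` always wins over the per-branch defaults, which mirrors the `newContentGeneratorConfig?.model || ...` fallbacks in the diff.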
@@ -1540,6 +1540,268 @@ describe('CoreToolScheduler request queueing', () => {
   });
 });

+describe('CoreToolScheduler Sequential Execution', () => {
+  it('should execute tool calls in a batch sequentially', async () => {
+    // Arrange
+    let firstCallFinished = false;
+    const executeFn = vi
+      .fn()
+      .mockImplementation(async (args: { call: number }) => {
+        if (args.call === 1) {
+          // First call, wait for a bit to simulate work
+          await new Promise((resolve) => setTimeout(resolve, 50));
+          firstCallFinished = true;
+          return { llmContent: 'First call done' };
+        }
+        if (args.call === 2) {
+          // Second call, should only happen after the first is finished
+          if (!firstCallFinished) {
+            throw new Error(
+              'Second tool call started before the first one finished!',
+            );
+          }
+          return { llmContent: 'Second call done' };
+        }
+        return { llmContent: 'default' };
+      });
+
+    const mockTool = new MockTool({ name: 'mockTool', execute: executeFn });
+    const declarativeTool = mockTool;
+
+    const mockToolRegistry = {
+      getTool: () => declarativeTool,
+      getToolByName: () => declarativeTool,
+      getFunctionDeclarations: () => [],
+      tools: new Map(),
+      discovery: {},
+      registerTool: () => {},
+      getToolByDisplayName: () => declarativeTool,
+      getTools: () => [],
+      discoverTools: async () => {},
+      getAllTools: () => [],
+      getToolsByServer: () => [],
+    } as unknown as ToolRegistry;
+
+    const onAllToolCallsComplete = vi.fn();
+    const onToolCallsUpdate = vi.fn();
+
+    const mockConfig = {
+      getSessionId: () => 'test-session-id',
+      getUsageStatisticsEnabled: () => true,
+      getDebugMode: () => false,
+      getApprovalMode: () => ApprovalMode.YOLO, // Use YOLO to avoid confirmation prompts
+      getAllowedTools: () => [],
+      getContentGeneratorConfig: () => ({
+        model: 'test-model',
+        authType: 'oauth-personal',
+      }),
+      getShellExecutionConfig: () => ({
+        terminalWidth: 90,
+        terminalHeight: 30,
+      }),
+      storage: {
+        getProjectTempDir: () => '/tmp',
+      },
+      getToolRegistry: () => mockToolRegistry,
+      getTruncateToolOutputThreshold: () =>
+        DEFAULT_TRUNCATE_TOOL_OUTPUT_THRESHOLD,
+      getTruncateToolOutputLines: () => DEFAULT_TRUNCATE_TOOL_OUTPUT_LINES,
+      getUseSmartEdit: () => false,
+      getUseModelRouter: () => false,
+      getGeminiClient: () => null,
+    } as unknown as Config;
+
+    const scheduler = new CoreToolScheduler({
+      config: mockConfig,
+      onAllToolCallsComplete,
+      onToolCallsUpdate,
+      getPreferredEditor: () => 'vscode',
+      onEditorClose: vi.fn(),
+    });
+
+    const abortController = new AbortController();
+    const requests = [
+      {
+        callId: '1',
+        name: 'mockTool',
+        args: { call: 1 },
+        isClientInitiated: false,
+        prompt_id: 'prompt-1',
+      },
+      {
+        callId: '2',
+        name: 'mockTool',
+        args: { call: 2 },
+        isClientInitiated: false,
+        prompt_id: 'prompt-1',
+      },
+    ];
+
+    // Act
+    await scheduler.schedule(requests, abortController.signal);
+
+    // Assert
+    await vi.waitFor(() => {
+      expect(onAllToolCallsComplete).toHaveBeenCalled();
+    });
+
+    // Check that execute was called twice
+    expect(executeFn).toHaveBeenCalledTimes(2);
+
+    // Check the order of calls
+    const calls = executeFn.mock.calls;
+    expect(calls[0][0]).toEqual({ call: 1 });
+    expect(calls[1][0]).toEqual({ call: 2 });
+
+    // The onAllToolCallsComplete should be called once with both results
+    const completedCalls = onAllToolCallsComplete.mock
+      .calls[0][0] as ToolCall[];
+    expect(completedCalls).toHaveLength(2);
+    expect(completedCalls[0].status).toBe('success');
+    expect(completedCalls[1].status).toBe('success');
+  });
+
+  it('should cancel subsequent tools when the signal is aborted.', async () => {
+    // Arrange
+    const abortController = new AbortController();
+    let secondCallStarted = false;
+
+    const executeFn = vi
+      .fn()
+      .mockImplementation(async (args: { call: number }) => {
+        if (args.call === 1) {
+          return { llmContent: 'First call done' };
+        }
+        if (args.call === 2) {
+          secondCallStarted = true;
+          // This call will be cancelled while it's "running".
+          await new Promise((resolve) => setTimeout(resolve, 100));
+          // It should not return a value because it will be cancelled.
+          return { llmContent: 'Second call should not complete' };
+        }
+        if (args.call === 3) {
+          return { llmContent: 'Third call done' };
+        }
+        return { llmContent: 'default' };
+      });
+
+    const mockTool = new MockTool({ name: 'mockTool', execute: executeFn });
+    const declarativeTool = mockTool;
+
+    const mockToolRegistry = {
+      getTool: () => declarativeTool,
+      getToolByName: () => declarativeTool,
+      getFunctionDeclarations: () => [],
+      tools: new Map(),
+      discovery: {},
+      registerTool: () => {},
+      getToolByDisplayName: () => declarativeTool,
+      getTools: () => [],
+      discoverTools: async () => {},
+      getAllTools: () => [],
+      getToolsByServer: () => [],
+    } as unknown as ToolRegistry;
+
+    const onAllToolCallsComplete = vi.fn();
+    const onToolCallsUpdate = vi.fn();
+
+    const mockConfig = {
+      getSessionId: () => 'test-session-id',
+      getUsageStatisticsEnabled: () => true,
+      getDebugMode: () => false,
+      getApprovalMode: () => ApprovalMode.YOLO,
+      getAllowedTools: () => [],
+      getContentGeneratorConfig: () => ({
+        model: 'test-model',
+        authType: 'oauth-personal',
+      }),
+      getShellExecutionConfig: () => ({
+        terminalWidth: 90,
+        terminalHeight: 30,
+      }),
+      storage: {
+        getProjectTempDir: () => '/tmp',
+      },
+      getToolRegistry: () => mockToolRegistry,
+      getTruncateToolOutputThreshold: () =>
+        DEFAULT_TRUNCATE_TOOL_OUTPUT_THRESHOLD,
+      getTruncateToolOutputLines: () => DEFAULT_TRUNCATE_TOOL_OUTPUT_LINES,
+      getUseSmartEdit: () => false,
+      getUseModelRouter: () => false,
+      getGeminiClient: () => null,
+    } as unknown as Config;
+
+    const scheduler = new CoreToolScheduler({
+      config: mockConfig,
+      onAllToolCallsComplete,
+      onToolCallsUpdate,
+      getPreferredEditor: () => 'vscode',
+      onEditorClose: vi.fn(),
+    });
+
+    const requests = [
+      {
+        callId: '1',
+        name: 'mockTool',
+        args: { call: 1 },
+        isClientInitiated: false,
+        prompt_id: 'prompt-1',
+      },
+      {
+        callId: '2',
+        name: 'mockTool',
+        args: { call: 2 },
+        isClientInitiated: false,
+        prompt_id: 'prompt-1',
+      },
+      {
+        callId: '3',
+        name: 'mockTool',
+        args: { call: 3 },
+        isClientInitiated: false,
+        prompt_id: 'prompt-1',
+      },
+    ];
+
+    // Act
+    const schedulePromise = scheduler.schedule(
+      requests,
+      abortController.signal,
+    );
+
+    // Wait for the second call to start, then abort.
+    await vi.waitFor(() => {
+      expect(secondCallStarted).toBe(true);
+    });
+    abortController.abort();
+
+    await schedulePromise;
+
+    // Assert
+    await vi.waitFor(() => {
+      expect(onAllToolCallsComplete).toHaveBeenCalled();
+    });
+
+    // Check that execute was called for all three tools initially
+    expect(executeFn).toHaveBeenCalledTimes(3);
+    expect(executeFn).toHaveBeenCalledWith({ call: 1 });
+    expect(executeFn).toHaveBeenCalledWith({ call: 2 });
+    expect(executeFn).toHaveBeenCalledWith({ call: 3 });
+
+    const completedCalls = onAllToolCallsComplete.mock
+      .calls[0][0] as ToolCall[];
+    expect(completedCalls).toHaveLength(3);
+
+    const call1 = completedCalls.find((c) => c.request.callId === '1');
+    const call2 = completedCalls.find((c) => c.request.callId === '2');
+    const call3 = completedCalls.find((c) => c.request.callId === '3');
+
+    expect(call1?.status).toBe('success');
+    expect(call2?.status).toBe('cancelled');
+    expect(call3?.status).toBe('cancelled');
+  });
+});
+
 describe('truncateAndSaveToFile', () => {
   const mockWriteFile = vi.mocked(fs.writeFile);
   const THRESHOLD = 40_000;
@@ -1719,14 +1981,14 @@ describe('truncateAndSaveToFile', () => {
     );

     expect(result.content).toContain(
-      'read_file tool with the absolute file path above',
+      'Tool output was too large and has been truncated',
     );
-    expect(result.content).toContain('read_file tool with offset=0, limit=100');
+    expect(result.content).toContain('The full output has been saved to:');
     expect(result.content).toContain(
-      'read_file tool with offset=N to skip N lines',
+      'To read the complete output, use the read_file tool with the absolute file path above',
     );
     expect(result.content).toContain(
-      'read_file tool with limit=M to read only M lines',
+      'The truncated output below shows the beginning and end of the content',
     );
   });

@@ -299,10 +299,7 @@ export async function truncateAndSaveToFile(
   return {
     content: `Tool output was too large and has been truncated.
 The full output has been saved to: ${outputFile}
-To read the complete output, use the ${ReadFileTool.Name} tool with the absolute file path above. For large files, you can use the offset and limit parameters to read specific sections:
-- ${ReadFileTool.Name} tool with offset=0, limit=100 to see the first 100 lines
-- ${ReadFileTool.Name} tool with offset=N to skip N lines from the beginning
-- ${ReadFileTool.Name} tool with limit=M to read only M lines at a time
+To read the complete output, use the ${ReadFileTool.Name} tool with the absolute file path above.
 The truncated output below shows the beginning and end of the content. The marker '... [CONTENT TRUNCATED] ...' indicates where content was removed.
 This allows you to efficiently examine different parts of the output without loading the entire file.
 Truncated part of the output:
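The message above describes head-and-tail truncation: keep the beginning and end of the output and replace the middle with a `... [CONTENT TRUNCATED] ...` marker. A minimal standalone sketch of that idea follows; `truncateMiddle` is a hypothetical helper for illustration, not the repository's `truncateAndSaveToFile` (which also writes the full output to a temp file):

```typescript
// Illustrative head-and-tail truncation: keep roughly the first and last
// keepLines/2 lines and mark the removed middle. Not the real implementation.

function truncateMiddle(text: string, keepLines: number): string {
  const lines = text.split('\n');
  if (lines.length <= keepLines) return text; // nothing to truncate
  const head = lines.slice(0, Math.ceil(keepLines / 2));
  const tail = lines.slice(-Math.floor(keepLines / 2));
  return [...head, '... [CONTENT TRUNCATED] ...', ...tail].join('\n');
}
```

Keeping both ends lets a model inspect the start and the final result of a long shell command without loading the whole transcript.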
@@ -846,7 +843,7 @@ export class CoreToolScheduler {
         );
       }
     }
-    this.attemptExecutionOfScheduledCalls(signal);
+    await this.attemptExecutionOfScheduledCalls(signal);
     void this.checkAndNotifyCompletion();
   } finally {
     this.isScheduling = false;
@@ -921,7 +918,7 @@ export class CoreToolScheduler {
       }
       this.setStatusInternal(callId, 'scheduled');
     }
-    this.attemptExecutionOfScheduledCalls(signal);
+    await this.attemptExecutionOfScheduledCalls(signal);
   }

   /**
@@ -967,7 +964,9 @@ export class CoreToolScheduler {
     });
   }

-  private attemptExecutionOfScheduledCalls(signal: AbortSignal): void {
+  private async attemptExecutionOfScheduledCalls(
+    signal: AbortSignal,
+  ): Promise<void> {
     const allCallsFinalOrScheduled = this.toolCalls.every(
       (call) =>
         call.status === 'scheduled' ||
@@ -981,8 +980,8 @@ export class CoreToolScheduler {
       (call) => call.status === 'scheduled',
     );

-    callsToExecute.forEach((toolCall) => {
-      if (toolCall.status !== 'scheduled') return;
+    for (const toolCall of callsToExecute) {
+      if (toolCall.status !== 'scheduled') continue;

       const scheduledCall = toolCall;
       const { callId, name: toolName } = scheduledCall.request;
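The `forEach` to `for...of` change in the hunk above is what makes the scheduler sequential: `Array.prototype.forEach` discards the promises returned by an async callback, so every tool call used to start at once, whereas `await` inside a `for...of` loop runs one call to completion before starting the next. A standalone illustration of the difference (`runSequentially` and `demo` are illustrative names, not scheduler code):

```typescript
// Awaiting inside for...of serializes async work; forEach would not.

async function runSequentially(
  tasks: Array<() => Promise<string>>,
): Promise<string[]> {
  const results: string[] = [];
  for (const task of tasks) {
    // Each task finishes before the next one starts.
    results.push(await task());
  }
  return results;
}

async function demo(): Promise<string[]> {
  const order: string[] = [];
  await runSequentially([
    async () => {
      // Slow first task: with forEach, the second task would win the race.
      await new Promise((r) => setTimeout(r, 20));
      order.push('first');
      return 'first';
    },
    async () => {
      order.push('second'); // runs only after the first resolves
      return 'second';
    },
  ]);
  return order;
}
```

This also explains why `return` became `continue`: inside a callback, `return` only exits that callback, while `continue` skips to the next iteration of the enclosing loop.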
@@ -1033,107 +1032,106 @@ export class CoreToolScheduler {
         );
       }

-      promise
-        .then(async (toolResult: ToolResult) => {
+      try {
+        const toolResult: ToolResult = await promise;
         if (signal.aborted) {
           this.setStatusInternal(
             callId,
             'cancelled',
             'User cancelled tool execution.',
           );
-          return;
+          continue;
         }

         if (toolResult.error === undefined) {
           let content = toolResult.llmContent;
           let outputFile: string | undefined = undefined;
           const contentLength =
             typeof content === 'string' ? content.length : undefined;
           if (
             typeof content === 'string' &&
             toolName === ShellTool.Name &&
             this.config.getEnableToolOutputTruncation() &&
             this.config.getTruncateToolOutputThreshold() > 0 &&
             this.config.getTruncateToolOutputLines() > 0
           ) {
             const originalContentLength = content.length;
             const threshold = this.config.getTruncateToolOutputThreshold();
             const lines = this.config.getTruncateToolOutputLines();
             const truncatedResult = await truncateAndSaveToFile(
-              content,
-              callId,
-              this.config.storage.getProjectTempDir(),
-              threshold,
-              lines,
-            );
-            content = truncatedResult.content;
-            outputFile = truncatedResult.outputFile;
-
-            if (outputFile) {
-              logToolOutputTruncated(
-                this.config,
-                new ToolOutputTruncatedEvent(
-                  scheduledCall.request.prompt_id,
-                  {
-                    toolName,
-                    originalContentLength,
-                    truncatedContentLength: content.length,
-                    threshold,
-                    lines,
-                  },
-                ),
-              );
-            }
-          }
-
-          const response = convertToFunctionResponse(
-            toolName,
-            callId,
-            content,
-          );
-          const successResponse: ToolCallResponseInfo = {
-            callId,
-            responseParts: response,
-            resultDisplay: toolResult.returnDisplay,
-            error: undefined,
-            errorType: undefined,
-            outputFile,
-            contentLength,
-          };
-          this.setStatusInternal(callId, 'success', successResponse);
-        } else {
-          // It is a failure
-          const error = new Error(toolResult.error.message);
-          const errorResponse = createErrorResponse(
-            scheduledCall.request,
-            error,
-            toolResult.error.type,
-          );
-          this.setStatusInternal(callId, 'error', errorResponse);
-        }
-      })
-      .catch((executionError: Error) => {
-        if (signal.aborted) {
-          this.setStatusInternal(
-            callId,
-            'cancelled',
-            'User cancelled tool execution.',
-          );
-        } else {
-          this.setStatusInternal(
-            callId,
-            'error',
-            createErrorResponse(
-              scheduledCall.request,
-              executionError instanceof Error
-                ? executionError
-                : new Error(String(executionError)),
-              ToolErrorType.UNHANDLED_EXCEPTION,
-            ),
-          );
-        }
-      });
-    });
+              content,
+              callId,
+              this.config.storage.getProjectTempDir(),
+              threshold,
+              lines,
+            );
+            content = truncatedResult.content;
+            outputFile = truncatedResult.outputFile;
+
+            if (outputFile) {
+              logToolOutputTruncated(
+                this.config,
+                new ToolOutputTruncatedEvent(
+                  scheduledCall.request.prompt_id,
+                  {
+                    toolName,
+                    originalContentLength,
+                    truncatedContentLength: content.length,
+                    threshold,
+                    lines,
+                  },
+                ),
+              );
+            }
+          }
+
+          const response = convertToFunctionResponse(
+            toolName,
+            callId,
+            content,
+          );
+          const successResponse: ToolCallResponseInfo = {
+            callId,
+            responseParts: response,
+            resultDisplay: toolResult.returnDisplay,
+            error: undefined,
+            errorType: undefined,
+            outputFile,
+            contentLength,
+          };
+          this.setStatusInternal(callId, 'success', successResponse);
+        } else {
+          // It is a failure
+          const error = new Error(toolResult.error.message);
+          const errorResponse = createErrorResponse(
+            scheduledCall.request,
+            error,
+            toolResult.error.type,
+          );
+          this.setStatusInternal(callId, 'error', errorResponse);
+        }
+      } catch (executionError: unknown) {
+        if (signal.aborted) {
+          this.setStatusInternal(
+            callId,
+            'cancelled',
+            'User cancelled tool execution.',
+          );
+        } else {
+          this.setStatusInternal(
+            callId,
+            'error',
+            createErrorResponse(
+              scheduledCall.request,
+              executionError instanceof Error
+                ? executionError
+                : new Error(String(executionError)),
+              ToolErrorType.UNHANDLED_EXCEPTION,
+            ),
+          );
+        }
+      }
+    }
   }
 }

@@ -23,8 +23,6 @@ import { setSimulate429 } from '../utils/testUtils.js';
 import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
 import { AuthType } from './contentGenerator.js';
 import { type RetryOptions } from '../utils/retry.js';
-import type { ToolRegistry } from '../tools/tool-registry.js';
-import { Kind } from '../tools/tools.js';
 import { uiTelemetryService } from '../telemetry/uiTelemetry.js';

 // Mock fs module to prevent actual file system operations during tests
@@ -1305,259 +1303,6 @@ describe('GeminiChat', () => {
     expect(turn4.parts[0].text).toBe('second response');
   });

-  describe('stopBeforeSecondMutator', () => {
-    beforeEach(() => {
-      // Common setup for these tests: mock the tool registry.
-      const mockToolRegistry = {
-        getTool: vi.fn((toolName: string) => {
-          if (toolName === 'edit') {
-            return { kind: Kind.Edit };
-          }
-          return { kind: Kind.Other };
-        }),
-      } as unknown as ToolRegistry;
-      vi.mocked(mockConfig.getToolRegistry).mockReturnValue(mockToolRegistry);
-    });
-
-    it('should stop streaming before a second mutator tool call', async () => {
-      const responses = [
-        {
-          candidates: [
-            { content: { role: 'model', parts: [{ text: 'First part. ' }] } },
-          ],
-        },
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ functionCall: { name: 'edit', args: {} } }],
-              },
-            },
-          ],
-        },
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ functionCall: { name: 'fetch', args: {} } }],
-              },
-            },
-          ],
-        },
-        // This chunk contains the second mutator and should be clipped.
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [
-                  { functionCall: { name: 'edit', args: {} } },
-                  { text: 'some trailing text' },
-                ],
-              },
-            },
-          ],
-        },
-        // This chunk should never be reached.
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ text: 'This should not appear.' }],
-              },
-            },
-          ],
-        },
-      ] as unknown as GenerateContentResponse[];
-
-      vi.mocked(mockContentGenerator.generateContentStream).mockResolvedValue(
-        (async function* () {
-          for (const response of responses) {
-            yield response;
-          }
-        })(),
-      );
-
-      const stream = await chat.sendMessageStream(
-        'test-model',
-        { message: 'test message' },
-        'prompt-id-mutator-test',
-      );
-      for await (const _ of stream) {
-        // Consume the stream to trigger history recording.
-      }
-
-      const history = chat.getHistory();
-      expect(history.length).toBe(2);
-
-      const modelTurn = history[1]!;
-      expect(modelTurn.role).toBe('model');
-      expect(modelTurn?.parts?.length).toBe(3);
-      expect(modelTurn?.parts![0]!.text).toBe('First part. ');
-      expect(modelTurn.parts![1]!.functionCall?.name).toBe('edit');
-      expect(modelTurn.parts![2]!.functionCall?.name).toBe('fetch');
-    });
-
-    it('should not stop streaming if only one mutator is present', async () => {
-      const responses = [
-        {
-          candidates: [
-            { content: { role: 'model', parts: [{ text: 'Part 1. ' }] } },
-          ],
-        },
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ functionCall: { name: 'edit', args: {} } }],
-              },
-            },
-          ],
-        },
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ text: 'Part 2.' }],
-              },
-              finishReason: 'STOP',
-            },
-          ],
-        },
-      ] as unknown as GenerateContentResponse[];
-
-      vi.mocked(mockContentGenerator.generateContentStream).mockResolvedValue(
-        (async function* () {
-          for (const response of responses) {
-            yield response;
-          }
-        })(),
-      );
-
-      const stream = await chat.sendMessageStream(
-        'test-model',
-        { message: 'test message' },
-        'prompt-id-one-mutator',
-      );
-      for await (const _ of stream) {
-        /* consume */
-      }
-
-      const history = chat.getHistory();
-      const modelTurn = history[1]!;
-      expect(modelTurn?.parts?.length).toBe(3);
-      expect(modelTurn.parts![1]!.functionCall?.name).toBe('edit');
-      expect(modelTurn.parts![2]!.text).toBe('Part 2.');
-    });
-
-    it('should clip the chunk containing the second mutator, preserving prior parts', async () => {
-      const responses = [
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [{ functionCall: { name: 'edit', args: {} } }],
-              },
-            },
-          ],
-        },
-        // This chunk has a valid part before the second mutator.
-        // The valid part should be kept, the rest of the chunk discarded.
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [
-                  { text: 'Keep this text. ' },
-                  { functionCall: { name: 'edit', args: {} } },
-                  { text: 'Discard this text.' },
-                ],
-              },
-              finishReason: 'STOP',
-            },
-          ],
-        },
-      ] as unknown as GenerateContentResponse[];
-
-      const stream = (async function* () {
-        for (const response of responses) {
-          yield response;
-        }
-      })();
-
-      vi.mocked(mockContentGenerator.generateContentStream).mockResolvedValue(
-        stream,
-      );
-
-      const resultStream = await chat.sendMessageStream(
-        'test-model',
-        { message: 'test' },
-        'prompt-id-clip-chunk',
-      );
-      for await (const _ of resultStream) {
-        /* consume */
-      }
-
-      const history = chat.getHistory();
-      const modelTurn = history[1]!;
-      expect(modelTurn?.parts?.length).toBe(2);
-      expect(modelTurn.parts![0]!.functionCall?.name).toBe('edit');
-      expect(modelTurn.parts![1]!.text).toBe('Keep this text. ');
-    });
-
-    it('should handle two mutators in the same chunk (parallel call scenario)', async () => {
-      const responses = [
-        {
-          candidates: [
-            {
-              content: {
-                role: 'model',
-                parts: [
-                  { text: 'Some text. ' },
-                  { functionCall: { name: 'edit', args: {} } },
-                  { functionCall: { name: 'edit', args: {} } },
],
|
|
||||||
},
|
|
||||||
finishReason: 'STOP',
|
|
||||||
},
|
|
||||||
],
|
|
||||||
},
|
|
||||||
] as unknown as GenerateContentResponse[];
|
|
||||||
|
|
||||||
const stream = (async function* () {
|
|
||||||
for (const response of responses) {
|
|
||||||
yield response;
|
|
||||||
}
|
|
||||||
})();
|
|
||||||
|
|
||||||
vi.mocked(mockContentGenerator.generateContentStream).mockResolvedValue(
|
|
||||||
stream,
|
|
||||||
);
|
|
||||||
|
|
||||||
const resultStream = await chat.sendMessageStream(
|
|
||||||
'test-model',
|
|
||||||
{ message: 'test' },
|
|
||||||
'prompt-id-parallel-mutators',
|
|
||||||
);
|
|
||||||
for await (const _ of resultStream) {
|
|
||||||
/* consume */
|
|
||||||
}
|
|
||||||
|
|
||||||
const history = chat.getHistory();
|
|
||||||
const modelTurn = history[1]!;
|
|
||||||
expect(modelTurn?.parts?.length).toBe(2);
|
|
||||||
expect(modelTurn.parts![0]!.text).toBe('Some text. ');
|
|
||||||
expect(modelTurn.parts![1]!.functionCall?.name).toBe('edit');
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
describe('Model Resolution', () => {
|
describe('Model Resolution', () => {
|
||||||
const mockResponse = {
|
const mockResponse = {
|
||||||
candidates: [
|
candidates: [
|
||||||
|
|||||||
@@ -7,16 +7,15 @@
 // DISCLAIMER: This is a copied version of https://github.com/googleapis/js-genai/blob/main/src/chats.ts with the intention of working around a key bug
 // where function responses are not treated as "valid" responses: https://b.corp.google.com/issues/420354090
 
-import {
+import type {
   GenerateContentResponse,
-  type Content,
-  type GenerateContentConfig,
-  type SendMessageParameters,
-  type Part,
-  type Tool,
-  FinishReason,
-  ApiError,
+  Content,
+  GenerateContentConfig,
+  SendMessageParameters,
+  Part,
+  Tool,
 } from '@google/genai';
+import { ApiError } from '@google/genai';
 import { toParts } from '../code_assist/converter.js';
 import { createUserContent } from '@google/genai';
 import { retryWithBackoff } from '../utils/retry.js';
@@ -25,7 +24,7 @@ import {
   DEFAULT_GEMINI_FLASH_MODEL,
   getEffectiveModel,
 } from '../config/models.js';
-import { hasCycleInSchema, MUTATOR_KINDS } from '../tools/tools.js';
+import { hasCycleInSchema } from '../tools/tools.js';
 import type { StructuredError } from './turn.js';
 import {
   logContentRetry,
@@ -511,7 +510,7 @@ export class GeminiChat {
     let hasToolCall = false;
     let hasFinishReason = false;
 
-    for await (const chunk of this.stopBeforeSecondMutator(streamResponse)) {
+    for await (const chunk of streamResponse) {
       hasFinishReason =
         chunk?.candidates?.some((candidate) => candidate.finishReason) ?? false;
       if (isValidResponse(chunk)) {
@@ -629,64 +628,6 @@ export class GeminiChat {
       });
     }
   }
-
-  /**
-   * Truncates the chunkStream right before the second function call to a
-   * function that mutates state. This may involve trimming parts from a chunk
-   * as well as omtting some chunks altogether.
-   *
-   * We do this because it improves tool call quality if the model gets
-   * feedback from one mutating function call before it makes the next one.
-   */
-  private async *stopBeforeSecondMutator(
-    chunkStream: AsyncGenerator<GenerateContentResponse>,
-  ): AsyncGenerator<GenerateContentResponse> {
-    let foundMutatorFunctionCall = false;
-
-    for await (const chunk of chunkStream) {
-      const candidate = chunk.candidates?.[0];
-      const content = candidate?.content;
-      if (!candidate || !content?.parts) {
-        yield chunk;
-        continue;
-      }
-
-      const truncatedParts: Part[] = [];
-      for (const part of content.parts) {
-        if (this.isMutatorFunctionCall(part)) {
-          if (foundMutatorFunctionCall) {
-            // This is the second mutator call.
-            // Truncate and return immedaitely.
-            const newChunk = new GenerateContentResponse();
-            newChunk.candidates = [
-              {
-                ...candidate,
-                content: {
-                  ...content,
-                  parts: truncatedParts,
-                },
-                finishReason: FinishReason.STOP,
-              },
-            ];
-            yield newChunk;
-            return;
-          }
-          foundMutatorFunctionCall = true;
-        }
-        truncatedParts.push(part);
-      }
-
-      yield chunk;
-    }
-  }
-
-  private isMutatorFunctionCall(part: Part): boolean {
-    if (!part?.functionCall?.name) {
-      return false;
-    }
-    const tool = this.config.getToolRegistry().getTool(part.functionCall.name);
-    return !!tool && MUTATOR_KINDS.includes(tool.kind);
-  }
 }
 
 /** Visible for Testing */
@@ -1,2 +1,8 @@
 export const DEFAULT_TIMEOUT = 120000;
 export const DEFAULT_MAX_RETRIES = 3;
+
+export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1';
+export const DEFAULT_DASHSCOPE_BASE_URL =
+  'https://dashscope.aliyuncs.com/compatible-mode/v1';
+export const DEFAULT_DEEPSEEK_BASE_URL = 'https://api.deepseek.com/v1';
+export const DEFAULT_OPEN_ROUTER_BASE_URL = 'https://openrouter.ai/api/v1';
@@ -32,6 +32,7 @@ export class OpenAIContentGenerator implements ContentGenerator {
       telemetryService: new DefaultTelemetryService(
         cliConfig,
         contentGeneratorConfig.enableOpenAILogging,
+        contentGeneratorConfig.openAILoggingDir,
       ),
       errorHandler: new EnhancedErrorHandler(
         (error: unknown, request: GenerateContentParameters) =>
@@ -2,7 +2,11 @@ import OpenAI from 'openai';
 import type { Config } from '../../../config/config.js';
 import type { ContentGeneratorConfig } from '../../contentGenerator.js';
 import { AuthType } from '../../contentGenerator.js';
-import { DEFAULT_TIMEOUT, DEFAULT_MAX_RETRIES } from '../constants.js';
+import {
+  DEFAULT_TIMEOUT,
+  DEFAULT_MAX_RETRIES,
+  DEFAULT_DASHSCOPE_BASE_URL,
+} from '../constants.js';
 import { tokenLimit } from '../../tokenLimits.js';
 import type {
   OpenAICompatibleProvider,
@@ -53,7 +57,7 @@ export class DashScopeOpenAICompatibleProvider
   buildClient(): OpenAI {
     const {
       apiKey,
-      baseUrl,
+      baseUrl = DEFAULT_DASHSCOPE_BASE_URL,
      timeout = DEFAULT_TIMEOUT,
      maxRetries = DEFAULT_MAX_RETRIES,
    } = this.contentGeneratorConfig;
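The fallback introduced in `buildClient` above is plain destructuring-with-default. A minimal standalone sketch (the `resolveClientOptions` helper and its config shape are illustrative, not part of the codebase; the constant value is the one added to `constants.ts` above):

```typescript
// Illustrative sketch of the `baseUrl = DEFAULT_DASHSCOPE_BASE_URL` default
// added in the diff above. `resolveClientOptions` is a hypothetical helper.
const DEFAULT_DASHSCOPE_BASE_URL =
  'https://dashscope.aliyuncs.com/compatible-mode/v1';

interface GeneratorConfig {
  apiKey?: string;
  baseUrl?: string;
  timeout?: number;
  maxRetries?: number;
}

function resolveClientOptions(cfg: GeneratorConfig) {
  // Destructuring defaults only apply when the property is undefined,
  // so an explicitly configured baseUrl always wins.
  const {
    baseUrl = DEFAULT_DASHSCOPE_BASE_URL,
    timeout = 120000,
    maxRetries = 3,
  } = cfg;
  return { baseUrl, timeout, maxRetries };
}
```

An empty config therefore resolves to the DashScope compatible-mode endpoint, while a caller-supplied `baseUrl` passes through untouched.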
@@ -7,7 +7,7 @@
 import type { Config } from '../../config/config.js';
 import { logApiError, logApiResponse } from '../../telemetry/loggers.js';
 import { ApiErrorEvent, ApiResponseEvent } from '../../telemetry/types.js';
-import { openaiLogger } from '../../utils/openaiLogger.js';
+import { OpenAILogger } from '../../utils/openaiLogger.js';
 import type { GenerateContentResponse } from '@google/genai';
 import type OpenAI from 'openai';
 
@@ -43,10 +43,17 @@ export interface TelemetryService {
 }
 
 export class DefaultTelemetryService implements TelemetryService {
+  private logger: OpenAILogger;
+
   constructor(
     private config: Config,
     private enableOpenAILogging: boolean = false,
-  ) {}
+    openAILoggingDir?: string,
+  ) {
+    // Always create a new logger instance to ensure correct working directory
+    // If no custom directory is provided, undefined will use the default path
+    this.logger = new OpenAILogger(openAILoggingDir);
+  }
 
   async logSuccess(
     context: RequestContext,
@@ -68,7 +75,7 @@ export class DefaultTelemetryService implements TelemetryService {
 
     // Log interaction if enabled
     if (this.enableOpenAILogging && openaiRequest && openaiResponse) {
-      await openaiLogger.logInteraction(openaiRequest, openaiResponse);
+      await this.logger.logInteraction(openaiRequest, openaiResponse);
     }
   }
 
@@ -97,7 +104,7 @@ export class DefaultTelemetryService implements TelemetryService {
 
     // Log error interaction if enabled
     if (this.enableOpenAILogging && openaiRequest) {
-      await openaiLogger.logInteraction(
+      await this.logger.logInteraction(
        openaiRequest,
        undefined,
        error as Error,
@@ -137,7 +144,7 @@ export class DefaultTelemetryService implements TelemetryService {
      openaiChunks.length > 0
    ) {
      const combinedResponse = this.combineOpenAIChunksForLogging(openaiChunks);
-      await openaiLogger.logInteraction(openaiRequest, combinedResponse);
+      await this.logger.logInteraction(openaiRequest, combinedResponse);
    }
  }
 
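The hunk above swaps a module-level singleton logger for a per-instance one, so each telemetry service can honor its own logging directory. A minimal sketch of the pattern (both class names here are illustrative stand-ins, not the real `OpenAILogger`):

```typescript
// Illustrative: construct the logger per instance instead of sharing a
// module-level singleton, mirroring the DefaultTelemetryService change above.
class FileLogger {
  // Default path is an assumption for the sketch, not the real default.
  constructor(readonly dir: string = '/tmp/openai-logs') {}
}

class TelemetryService {
  private logger: FileLogger;

  constructor(loggingDir?: string) {
    // A fresh logger per service means a custom directory takes effect;
    // undefined falls back to the logger's own default path.
    this.logger = new FileLogger(loggingDir);
  }

  get loggingDir(): string {
    return this.logger.dir;
  }
}
```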
@@ -64,6 +64,12 @@ describe('normalize', () => {
     expect(normalize('qwen-vl-max-latest')).toBe('qwen-vl-max-latest');
   });
 
+  it('should preserve date suffixes for Kimi K2 models', () => {
+    expect(normalize('kimi-k2-0905-preview')).toBe('kimi-k2-0905');
+    expect(normalize('kimi-k2-0711-preview')).toBe('kimi-k2-0711');
+    expect(normalize('kimi-k2-turbo-preview')).toBe('kimi-k2-turbo');
+  });
+
   it('should remove date like suffixes', () => {
     expect(normalize('deepseek-r1-0528')).toBe('deepseek-r1');
   });
@@ -213,7 +219,7 @@ describe('tokenLimit', () => {
     });
   });
 
-  describe('Other models', () => {
+  describe('DeepSeek', () => {
    it('should return the correct limit for deepseek-r1', () => {
      expect(tokenLimit('deepseek-r1')).toBe(131072);
    });
@@ -226,9 +232,27 @@ describe('tokenLimit', () => {
    it('should return the correct limit for deepseek-v3.2', () => {
      expect(tokenLimit('deepseek-v3.2-exp')).toBe(131072);
    });
-    it('should return the correct limit for kimi-k2-instruct', () => {
-      expect(tokenLimit('kimi-k2-instruct')).toBe(131072);
-    });
+  });
+
+  describe('Moonshot Kimi', () => {
+    it('should return the correct limit for kimi-k2-0905-preview', () => {
+      expect(tokenLimit('kimi-k2-0905-preview')).toBe(262144); // 256K
+      expect(tokenLimit('kimi-k2-0905')).toBe(262144);
+    });
+    it('should return the correct limit for kimi-k2-turbo-preview', () => {
+      expect(tokenLimit('kimi-k2-turbo-preview')).toBe(262144); // 256K
+      expect(tokenLimit('kimi-k2-turbo')).toBe(262144);
+    });
+    it('should return the correct limit for kimi-k2-0711-preview', () => {
+      expect(tokenLimit('kimi-k2-0711-preview')).toBe(131072); // 128K
+      expect(tokenLimit('kimi-k2-0711')).toBe(131072);
+    });
+    it('should return the correct limit for kimi-k2-instruct', () => {
+      expect(tokenLimit('kimi-k2-instruct')).toBe(131072); // 128K
+    });
+  });
+
+  describe('Other models', () => {
     it('should return the correct limit for gpt-oss', () => {
       expect(tokenLimit('gpt-oss')).toBe(131072);
     });
@@ -47,8 +47,13 @@ export function normalize(model: string): string {
   // remove trailing build / date / revision suffixes:
   // - dates (e.g., -20250219), -v1, version numbers, 'latest', 'preview' etc.
   s = s.replace(/-preview/g, '');
-  // Special handling for Qwen model names that include "-latest" as part of the model name
-  if (!s.match(/^qwen-(?:plus|flash|vl-max)-latest$/)) {
+  // Special handling for model names that include date/version as part of the model identifier
+  // - Qwen models: qwen-plus-latest, qwen-flash-latest, qwen-vl-max-latest
+  // - Kimi models: kimi-k2-0905, kimi-k2-0711, etc. (keep date for version distinction)
+  if (
+    !s.match(/^qwen-(?:plus|flash|vl-max)-latest$/) &&
+    !s.match(/^kimi-k2-\d{4}$/)
+  ) {
     // Regex breakdown:
     // -(?:...)$ - Non-capturing group for suffixes at the end of the string
     // The following patterns are matched within the group:
@@ -160,14 +165,19 @@ const PATTERNS: Array<[RegExp, TokenCount]> = [
   // -------------------
   // DeepSeek
   // -------------------
-  [/^deepseek$/, LIMITS['128k']],
-  [/^deepseek-r1(?:-.*)?$/, LIMITS['128k']],
-  [/^deepseek-v3(?:\.\d+)?(?:-.*)?$/, LIMITS['128k']],
+  [/^deepseek(?:-.*)?$/, LIMITS['128k']],
 
   // -------------------
-  // GPT-OSS / Kimi / Llama & Mistral examples
+  // Moonshot / Kimi
+  // -------------------
+  [/^kimi-k2-0905$/, LIMITS['256k']], // Kimi-k2-0905-preview: 256K context
+  [/^kimi-k2-turbo.*$/, LIMITS['256k']], // Kimi-k2-turbo-preview: 256K context
+  [/^kimi-k2-0711$/, LIMITS['128k']], // Kimi-k2-0711-preview: 128K context
+  [/^kimi-k2-instruct.*$/, LIMITS['128k']], // Kimi-k2-instruct: 128K context
+
+  // -------------------
+  // GPT-OSS / Llama & Mistral examples
   // -------------------
-  [/^kimi-k2-instruct.*$/, LIMITS['128k']],
   [/^gpt-oss.*$/, LIMITS['128k']],
   [/^llama-4-scout.*$/, LIMITS['10m']],
   [/^mistral-large-2.*$/, LIMITS['128k']],
@@ -199,6 +209,12 @@ const OUTPUT_PATTERNS: Array<[RegExp, TokenCount]> = [
 
   // Qwen3-VL-Plus: 32K max output tokens
   [/^qwen3-vl-plus$/, LIMITS['32k']],
+
+  // Deepseek-chat: 8k max tokens
+  [/^deepseek-chat$/, LIMITS['8k']],
+
+  // Deepseek-reasoner: 64k max tokens
+  [/^deepseek-reasoner$/, LIMITS['64k']],
 ];
 
 /**
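The normalization change above can be exercised in isolation. The sketch below uses a hypothetical `normalizeModelId` with a simplified suffix regex; the real `normalize` in `tokenLimits.ts` strips more suffix forms (8-digit dates, `-v1`, version numbers, etc.), but the guard logic for Qwen `-latest` and Kimi K2 date suffixes is the one added in the diff:

```typescript
// Minimal sketch of the suffix-normalization logic from the diff above.
// `normalizeModelId` is a hypothetical name; the real function is `normalize`.
function normalizeModelId(model: string): string {
  let s = model.trim().toLowerCase();
  // '-preview' is always stripped first.
  s = s.replace(/-preview/g, '');
  // Keep '-latest' for certain Qwen ids, and keep 4-digit date suffixes for
  // Kimi K2 ids (e.g. kimi-k2-0905) since the date distinguishes versions
  // with different context windows.
  if (
    !s.match(/^qwen-(?:plus|flash|vl-max)-latest$/) &&
    !s.match(/^kimi-k2-\d{4}$/)
  ) {
    // Simplified: drop a trailing '-latest' or date-like suffix.
    s = s.replace(/-(?:latest|\d{8}|\d{4})$/, '');
  }
  return s;
}

console.log(normalizeModelId('kimi-k2-0905-preview')); // 'kimi-k2-0905'
console.log(normalizeModelId('deepseek-r1-0528')); // 'deepseek-r1'
```

With the date preserved, `kimi-k2-0905` and `kimi-k2-0711` can then match distinct `PATTERNS` entries (256K vs 128K).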
@@ -153,6 +153,9 @@ export enum CompressionStatus {
   /** The compression failed due to an error counting tokens */
   COMPRESSION_FAILED_TOKEN_COUNT_ERROR,
 
+  /** The compression failed due to receiving an empty or null summary */
+  COMPRESSION_FAILED_EMPTY_SUMMARY,
+
   /** The compression was not necessary and no action was taken */
   NOOP,
 }
@@ -60,7 +60,10 @@ function verifyVSCode(
   if (ide.name !== IDE_DEFINITIONS.vscode.name) {
     return ide;
   }
-  if (ideProcessInfo.command.toLowerCase().includes('code')) {
+  if (
+    ideProcessInfo.command &&
+    ideProcessInfo.command.toLowerCase().includes('code')
+  ) {
     return IDE_DEFINITIONS.vscode;
   }
   return IDE_DEFINITIONS.vscodefork;
@@ -113,7 +113,7 @@ describe('IdeClient', () => {
       'utf8',
     );
     expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
-      new URL('http://localhost:8080/mcp'),
+      new URL('http://127.0.0.1:8080/mcp'),
       expect.any(Object),
     );
     expect(mockClient.connect).toHaveBeenCalledWith(mockHttpTransport);
@@ -181,7 +181,7 @@ describe('IdeClient', () => {
     await ideClient.connect();
 
     expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
-      new URL('http://localhost:9090/mcp'),
+      new URL('http://127.0.0.1:9090/mcp'),
       expect.any(Object),
     );
     expect(mockClient.connect).toHaveBeenCalledWith(mockHttpTransport);
@@ -230,7 +230,7 @@ describe('IdeClient', () => {
     await ideClient.connect();
 
     expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
-      new URL('http://localhost:8080/mcp'),
+      new URL('http://127.0.0.1:8080/mcp'),
      expect.any(Object),
    );
    expect(ideClient.getConnectionStatus().status).toBe(
@@ -665,7 +665,7 @@ describe('IdeClient', () => {
     await ideClient.connect();
 
     expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
-      new URL('http://localhost:8080/mcp'),
+      new URL('http://127.0.0.1:8080/mcp'),
       expect.objectContaining({
         requestInit: {
           headers: {
@@ -667,10 +667,10 @@ export class IdeClient {
   }
 
   private createProxyAwareFetch() {
-    // ignore proxy for 'localhost' by deafult to allow connecting to the ide mcp server
+    // ignore proxy for '127.0.0.1' by deafult to allow connecting to the ide mcp server
     const existingNoProxy = process.env['NO_PROXY'] || '';
     const agent = new EnvHttpProxyAgent({
-      noProxy: [existingNoProxy, 'localhost'].filter(Boolean).join(','),
+      noProxy: [existingNoProxy, '127.0.0.1'].filter(Boolean).join(','),
     });
     const undiciPromise = import('undici');
     return async (url: string | URL, init?: RequestInit): Promise<Response> => {
@@ -851,5 +851,5 @@ export class IdeClient {
 function getIdeServerHost() {
   const isInContainer =
     fs.existsSync('/.dockerenv') || fs.existsSync('/run/.containerenv');
-  return isInContainer ? 'host.docker.internal' : 'localhost';
+  return isInContainer ? 'host.docker.internal' : '127.0.0.1';
 }
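The host-selection change above is self-contained enough to sketch standalone. The function body is taken from the diff; the surrounding module is illustrative. Using the literal loopback address `127.0.0.1` instead of `'localhost'` keeps behavior independent of DNS resolution and lines up with the `NO_PROXY` entry changed in the same hunk:

```typescript
import * as fs from 'node:fs';

// Sketch of the getIdeServerHost change above: inside a container the IDE
// MCP server is reached via host.docker.internal; otherwise use the literal
// loopback address rather than the 'localhost' hostname.
function getIdeServerHost(): string {
  const isInContainer =
    fs.existsSync('/.dockerenv') || fs.existsSync('/run/.containerenv');
  return isInContainer ? 'host.docker.internal' : '127.0.0.1';
}

console.log(getIdeServerHost());
```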
@@ -112,14 +112,19 @@ describe('ide-installer', () => {
         platform: 'linux',
       });
       await installer.install();
 
+      // Note: The implementation uses process.platform, not the mocked platform
+      const isActuallyWindows = process.platform === 'win32';
+      const expectedCommand = isActuallyWindows ? '"code"' : 'code';
+
       expect(child_process.spawnSync).toHaveBeenCalledWith(
-        'code',
+        expectedCommand,
         [
           '--install-extension',
           'qwenlm.qwen-code-vscode-ide-companion',
           '--force',
         ],
-        { stdio: 'pipe' },
+        { stdio: 'pipe', shell: isActuallyWindows },
       );
     });
 
@@ -117,15 +117,16 @@ class VsCodeInstaller implements IdeInstaller {
       };
     }
 
+    const isWindows = process.platform === 'win32';
     try {
       const result = child_process.spawnSync(
-        commandPath,
+        isWindows ? `"${commandPath}"` : commandPath,
        [
          '--install-extension',
          'qwenlm.qwen-code-vscode-ide-companion',
          '--force',
        ],
-        { stdio: 'pipe' },
+        { stdio: 'pipe', shell: isWindows },
      );
 
      if (result.status !== 0) {
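The Windows handling above can be isolated into a small sketch (`runInstaller` is a hypothetical wrapper, not code from the repo). On Windows, VS Code's `code` launcher is a `.cmd` shim, so `spawnSync` needs `shell: true` to execute it; once a shell interprets the command line, a path containing spaces must itself be wrapped in quotes, while on other platforms the path is passed through untouched:

```typescript
import { spawnSync } from 'node:child_process';

// Illustrative wrapper around the spawnSync call changed in the diff above.
function runInstaller(commandPath: string, args: string[]) {
  const isWindows = process.platform === 'win32';
  return spawnSync(
    // Quote the path so cmd.exe does not split it on spaces.
    isWindows ? `"${commandPath}"` : commandPath,
    args,
    { stdio: 'pipe', shell: isWindows },
  );
}

// Example: any executable on PATH works the same way.
const result = runInstaller('node', ['--version']);
console.log(result.status);
```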
@@ -48,6 +48,7 @@ export * from './utils/systemEncoding.js';
 export * from './utils/textUtils.js';
 export * from './utils/formatters.js';
 export * from './utils/generateContentResponseUtilities.js';
+export * from './utils/ripgrepUtils.js';
 export * from './utils/filesearch/fileSearch.js';
 export * from './utils/errorParsing.js';
 export * from './utils/workspaceContext.js';
@@ -97,10 +98,12 @@ export * from './tools/write-file.js';
 export * from './tools/web-fetch.js';
 export * from './tools/memoryTool.js';
 export * from './tools/shell.js';
-export * from './tools/web-search.js';
+export * from './tools/web-search/index.js';
 export * from './tools/read-many-files.js';
 export * from './tools/mcp-client.js';
 export * from './tools/mcp-tool.js';
+export * from './tools/task.js';
+export * from './tools/todoWrite.js';
 
 // MCP OAuth
 export { MCPOAuthProvider } from './mcp/oauth-provider.js';
@@ -8,7 +8,7 @@ import { OpenAIContentGenerator } from '../core/openaiContentGenerator/index.js'
 import { DashScopeOpenAICompatibleProvider } from '../core/openaiContentGenerator/provider/dashscope.js';
 import type { IQwenOAuth2Client } from './qwenOAuth2.js';
 import { SharedTokenManager } from './sharedTokenManager.js';
-import type { Config } from '../config/config.js';
+import { type Config } from '../config/config.js';
 import type {
   GenerateContentParameters,
   GenerateContentResponse,
@@ -18,10 +18,7 @@ import type {
   EmbedContentResponse,
 } from '@google/genai';
 import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
-
-// Default fallback base URL if no endpoint is provided
-const DEFAULT_QWEN_BASE_URL =
-  'https://dashscope.aliyuncs.com/compatible-mode/v1';
+import { DEFAULT_DASHSCOPE_BASE_URL } from '../core/openaiContentGenerator/constants.js';
 
 /**
  * Qwen Content Generator that uses Qwen OAuth tokens with automatic refresh
@@ -58,7 +55,7 @@ export class QwenContentGenerator extends OpenAIContentGenerator {
   * Get the current endpoint URL with proper protocol and /v1 suffix
   */
  private getCurrentEndpoint(resourceUrl?: string): string {
-    const baseEndpoint = resourceUrl || DEFAULT_QWEN_BASE_URL;
+    const baseEndpoint = resourceUrl || DEFAULT_DASHSCOPE_BASE_URL;
     const suffix = '/v1';
 
     // Normalize the URL: add protocol if missing, ensure /v1 suffix
packages/core/src/services/chatCompressionService.test.ts (new file, 422 lines)
@@ -0,0 +1,422 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
  ChatCompressionService,
  findCompressSplitPoint,
} from './chatCompressionService.js';
import type { Content, GenerateContentResponse } from '@google/genai';
import { CompressionStatus } from '../core/turn.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
import { tokenLimit } from '../core/tokenLimits.js';
import type { GeminiChat } from '../core/geminiChat.js';
import type { Config } from '../config/config.js';
import { getInitialChatHistory } from '../utils/environmentContext.js';
import type { ContentGenerator } from '../core/contentGenerator.js';

vi.mock('../telemetry/uiTelemetry.js');
vi.mock('../core/tokenLimits.js');
vi.mock('../telemetry/loggers.js');
vi.mock('../utils/environmentContext.js');

describe('findCompressSplitPoint', () => {
  it('should throw an error for non-positive numbers', () => {
    expect(() => findCompressSplitPoint([], 0)).toThrow(
      'Fraction must be between 0 and 1',
    );
  });

  it('should throw an error for a fraction greater than or equal to 1', () => {
    expect(() => findCompressSplitPoint([], 1)).toThrow(
      'Fraction must be between 0 and 1',
    );
  });

  it('should handle an empty history', () => {
    expect(findCompressSplitPoint([], 0.5)).toBe(0);
  });

  it('should handle a fraction in the middle', () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (19%)
      { role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (40%)
      { role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (60%)
      { role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (80%)
      { role: 'user', parts: [{ text: 'This is the fifth message.' }] }, // JSON length: 65 (100%)
    ];
    expect(findCompressSplitPoint(history, 0.5)).toBe(4);
  });

  it('should handle a fraction of last index', () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (19%)
      { role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (40%)
      { role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (60%)
      { role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (80%)
      { role: 'user', parts: [{ text: 'This is the fifth message.' }] }, // JSON length: 65 (100%)
    ];
    expect(findCompressSplitPoint(history, 0.9)).toBe(4);
  });

  it('should handle a fraction of after last index', () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (24%)
      { role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (50%)
      { role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (74%)
      { role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (100%)
    ];
    expect(findCompressSplitPoint(history, 0.8)).toBe(4);
  });

  it('should return an earlier split point if no valid ones are after the threshold', () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'This is the first message.' }] },
      { role: 'model', parts: [{ text: 'This is the second message.' }] },
      { role: 'user', parts: [{ text: 'This is the third message.' }] },
      { role: 'model', parts: [{ functionCall: { name: 'foo', args: {} } }] },
    ];
    // Can't return 4 because the previous item has a function call.
    expect(findCompressSplitPoint(history, 0.99)).toBe(2);
  });

  it('should handle a history with only one item', () => {
    const historyWithEmptyParts: Content[] = [
      { role: 'user', parts: [{ text: 'Message 1' }] },
    ];
    expect(findCompressSplitPoint(historyWithEmptyParts, 0.5)).toBe(0);
  });

  it('should handle history with weird parts', () => {
    const historyWithEmptyParts: Content[] = [
      { role: 'user', parts: [{ text: 'Message 1' }] },
      {
        role: 'model',
        parts: [{ fileData: { fileUri: 'derp', mimeType: 'text/plain' } }],
      },
      { role: 'user', parts: [{ text: 'Message 2' }] },
    ];
    expect(findCompressSplitPoint(historyWithEmptyParts, 0.5)).toBe(2);
  });
});

describe('ChatCompressionService', () => {
  let service: ChatCompressionService;
  let mockChat: GeminiChat;
  let mockConfig: Config;
  const mockModel = 'gemini-pro';
  const mockPromptId = 'test-prompt-id';

  beforeEach(() => {
    service = new ChatCompressionService();
    mockChat = {
      getHistory: vi.fn(),
    } as unknown as GeminiChat;
    mockConfig = {
      getChatCompression: vi.fn(),
      getContentGenerator: vi.fn(),
    } as unknown as Config;

    vi.mocked(tokenLimit).mockReturnValue(1000);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(500);
    vi.mocked(getInitialChatHistory).mockImplementation(
      async (_config, extraHistory) => extraHistory || [],
    );
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('should return NOOP if history is empty', async () => {
    vi.mocked(mockChat.getHistory).mockReturnValue([]);
    const result = await service.compress(
      mockChat,
      mockPromptId,
      false,
      mockModel,
      mockConfig,
      false,
    );
    expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
    expect(result.newHistory).toBeNull();
  });

  it('should return NOOP if previously failed and not forced', async () => {
    vi.mocked(mockChat.getHistory).mockReturnValue([
      { role: 'user', parts: [{ text: 'hi' }] },
    ]);
    const result = await service.compress(
      mockChat,
      mockPromptId,
      false,
      mockModel,
      mockConfig,
      true,
    );
    expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
    expect(result.newHistory).toBeNull();
  });

  it('should return NOOP if under token threshold and not forced', async () => {
    vi.mocked(mockChat.getHistory).mockReturnValue([
      { role: 'user', parts: [{ text: 'hi' }] },
    ]);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(600);
    vi.mocked(tokenLimit).mockReturnValue(1000);
    // Threshold is 0.7 * 1000 = 700. 600 < 700, so NOOP.

    const result = await service.compress(
      mockChat,
      mockPromptId,
      false,
      mockModel,
      mockConfig,
      false,
    );
    expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
    expect(result.newHistory).toBeNull();
  });

  it('should return NOOP when contextPercentageThreshold is 0', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(800);
    vi.mocked(mockConfig.getChatCompression).mockReturnValue({
      contextPercentageThreshold: 0,
    });

    const mockGenerateContent = vi.fn();
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      false,
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info).toMatchObject({
      compressionStatus: CompressionStatus.NOOP,
      originalTokenCount: 0,
      newTokenCount: 0,
    });
    expect(mockGenerateContent).not.toHaveBeenCalled();
    expect(tokenLimit).not.toHaveBeenCalled();

    const forcedResult = await service.compress(
      mockChat,
      mockPromptId,
      true,
      mockModel,
      mockConfig,
      false,
    );
    expect(forcedResult.info).toMatchObject({
      compressionStatus: CompressionStatus.NOOP,
      originalTokenCount: 0,
      newTokenCount: 0,
    });
    expect(mockGenerateContent).not.toHaveBeenCalled();
    expect(tokenLimit).not.toHaveBeenCalled();
  });

  it('should compress if over token threshold', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
      { role: 'user', parts: [{ text: 'msg3' }] },
      { role: 'model', parts: [{ text: 'msg4' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(800);
    vi.mocked(tokenLimit).mockReturnValue(1000);
    const mockGenerateContent = vi.fn().mockResolvedValue({
      candidates: [
        {
          content: {
            parts: [{ text: 'Summary' }],
          },
        },
      ],
    } as unknown as GenerateContentResponse);
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      false,
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info.compressionStatus).toBe(CompressionStatus.COMPRESSED);
    expect(result.newHistory).not.toBeNull();
    expect(result.newHistory![0].parts![0].text).toBe('Summary');
    expect(mockGenerateContent).toHaveBeenCalled();
  });

  it('should force compress even if under threshold', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
      { role: 'user', parts: [{ text: 'msg3' }] },
      { role: 'model', parts: [{ text: 'msg4' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
    vi.mocked(tokenLimit).mockReturnValue(1000);

    const mockGenerateContent = vi.fn().mockResolvedValue({
      candidates: [
        {
          content: {
            parts: [{ text: 'Summary' }],
          },
        },
      ],
    } as unknown as GenerateContentResponse);
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      true, // forced
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info.compressionStatus).toBe(CompressionStatus.COMPRESSED);
    expect(result.newHistory).not.toBeNull();
  });

  it('should return FAILED if new token count is inflated', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(10);
    vi.mocked(tokenLimit).mockReturnValue(1000);

    const longSummary = 'a'.repeat(1000); // Long summary to inflate token count
    const mockGenerateContent = vi.fn().mockResolvedValue({
      candidates: [
        {
          content: {
            parts: [{ text: longSummary }],
          },
        },
      ],
    } as unknown as GenerateContentResponse);
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      true,
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info.compressionStatus).toBe(
      CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
    );
    expect(result.newHistory).toBeNull();
  });

  it('should return FAILED if summary is empty string', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
    vi.mocked(tokenLimit).mockReturnValue(1000);

    const mockGenerateContent = vi.fn().mockResolvedValue({
      candidates: [
        {
          content: {
            parts: [{ text: '' }], // Empty summary
          },
        },
      ],
    } as unknown as GenerateContentResponse);
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      true,
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info.compressionStatus).toBe(
      CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
    );
    expect(result.newHistory).toBeNull();
    expect(result.info.originalTokenCount).toBe(100);
    expect(result.info.newTokenCount).toBe(100);
  });

  it('should return FAILED if summary is only whitespace', async () => {
    const history: Content[] = [
      { role: 'user', parts: [{ text: 'msg1' }] },
      { role: 'model', parts: [{ text: 'msg2' }] },
    ];
    vi.mocked(mockChat.getHistory).mockReturnValue(history);
    vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
    vi.mocked(tokenLimit).mockReturnValue(1000);

    const mockGenerateContent = vi.fn().mockResolvedValue({
      candidates: [
        {
          content: {
            parts: [{ text: ' \n\t ' }], // Only whitespace
          },
        },
      ],
    } as unknown as GenerateContentResponse);
    vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
      generateContent: mockGenerateContent,
    } as unknown as ContentGenerator);

    const result = await service.compress(
      mockChat,
      mockPromptId,
      true,
      mockModel,
      mockConfig,
      false,
    );

    expect(result.info.compressionStatus).toBe(
      CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
    );
    expect(result.newHistory).toBeNull();
  });
});
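The tests above exercise the split-point selection and the compression paths; the service that follows estimates the post-compression size with a simple "1 token ≈ 4 characters" heuristic over the JSON-serialized history. A standalone sketch of that estimate (the `estimateTokens` name and the trimmed `Content` shape are illustrative, not the library types):

```typescript
// Standalone sketch of the post-compression token estimate:
// serialize each history entry to JSON, sum the character counts,
// and divide by 4 (1 token ≈ 4 characters). The trimmed Content
// shape and the estimateTokens name are illustrative only.
interface Part {
  text?: string;
}
interface Content {
  role: string;
  parts: Part[];
}

function estimateTokens(history: Content[]): number {
  const chars = history.reduce(
    (total, content) => total + JSON.stringify(content).length,
    0,
  );
  return Math.floor(chars / 4);
}
```

Because this is a character-count proxy rather than a real tokenizer, the estimate can exceed the pre-compression count for short histories with a long summary, which is why the service treats an inflated estimate as a failure.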
packages/core/src/services/chatCompressionService.ts (new file, 234 lines)
@@ -0,0 +1,234 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import type { GeminiChat } from '../core/geminiChat.js';
import { type ChatCompressionInfo, CompressionStatus } from '../core/turn.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
import { tokenLimit } from '../core/tokenLimits.js';
import { getCompressionPrompt } from '../core/prompts.js';
import { getResponseText } from '../utils/partUtils.js';
import { logChatCompression } from '../telemetry/loggers.js';
import { makeChatCompressionEvent } from '../telemetry/types.js';
import { getInitialChatHistory } from '../utils/environmentContext.js';

/**
 * Threshold for compression token count as a fraction of the model's token limit.
 * If the chat history exceeds this threshold, it will be compressed.
 */
export const COMPRESSION_TOKEN_THRESHOLD = 0.7;

/**
 * The fraction of the latest chat history to keep. A value of 0.3
 * means that only the last 30% of the chat history will be kept after compression.
 */
export const COMPRESSION_PRESERVE_THRESHOLD = 0.3;

/**
 * Returns the index of the oldest item to keep when compressing. May return
 * contents.length, which indicates that everything should be compressed.
 *
 * Exported for testing purposes.
 */
export function findCompressSplitPoint(
  contents: Content[],
  fraction: number,
): number {
  if (fraction <= 0 || fraction >= 1) {
    throw new Error('Fraction must be between 0 and 1');
  }

  const charCounts = contents.map((content) => JSON.stringify(content).length);
  const totalCharCount = charCounts.reduce((a, b) => a + b, 0);
  const targetCharCount = totalCharCount * fraction;

  let lastSplitPoint = 0; // 0 is always valid (compress nothing)
  let cumulativeCharCount = 0;
  for (let i = 0; i < contents.length; i++) {
    const content = contents[i];
    if (
      content.role === 'user' &&
      !content.parts?.some((part) => !!part.functionResponse)
    ) {
      if (cumulativeCharCount >= targetCharCount) {
        return i;
      }
      lastSplitPoint = i;
    }
    cumulativeCharCount += charCounts[i];
  }

  // We found no split points after targetCharCount.
  // Check if it's safe to compress everything.
  const lastContent = contents[contents.length - 1];
  if (
    lastContent?.role === 'model' &&
    !lastContent?.parts?.some((part) => part.functionCall)
  ) {
    return contents.length;
  }

  // Can't compress everything, so just compress at the last split point.
  return lastSplitPoint;
}

export class ChatCompressionService {
  async compress(
    chat: GeminiChat,
    promptId: string,
    force: boolean,
    model: string,
    config: Config,
    hasFailedCompressionAttempt: boolean,
  ): Promise<{ newHistory: Content[] | null; info: ChatCompressionInfo }> {
    const curatedHistory = chat.getHistory(true);
    const threshold =
      config.getChatCompression()?.contextPercentageThreshold ??
      COMPRESSION_TOKEN_THRESHOLD;

    // Regardless of `force`, don't do anything if the history is empty.
    if (
      curatedHistory.length === 0 ||
      threshold <= 0 ||
      (hasFailedCompressionAttempt && !force)
    ) {
      return {
        newHistory: null,
        info: {
          originalTokenCount: 0,
          newTokenCount: 0,
          compressionStatus: CompressionStatus.NOOP,
        },
      };
    }

    const originalTokenCount = uiTelemetryService.getLastPromptTokenCount();

    // Don't compress if not forced and we are under the limit.
    if (!force) {
      if (originalTokenCount < threshold * tokenLimit(model)) {
        return {
          newHistory: null,
          info: {
            originalTokenCount,
            newTokenCount: originalTokenCount,
            compressionStatus: CompressionStatus.NOOP,
          },
        };
      }
    }

    const splitPoint = findCompressSplitPoint(
      curatedHistory,
      1 - COMPRESSION_PRESERVE_THRESHOLD,
    );

    const historyToCompress = curatedHistory.slice(0, splitPoint);
    const historyToKeep = curatedHistory.slice(splitPoint);

    if (historyToCompress.length === 0) {
      return {
        newHistory: null,
        info: {
          originalTokenCount,
          newTokenCount: originalTokenCount,
          compressionStatus: CompressionStatus.NOOP,
        },
      };
    }

    const summaryResponse = await config.getContentGenerator().generateContent(
      {
        model,
        contents: [
          ...historyToCompress,
          {
            role: 'user',
            parts: [
              {
                text: 'First, reason in your scratchpad. Then, generate the <state_snapshot>.',
              },
            ],
          },
        ],
        config: {
          systemInstruction: getCompressionPrompt(),
        },
      },
      promptId,
    );
    const summary = getResponseText(summaryResponse) ?? '';
    const isSummaryEmpty = !summary || summary.trim().length === 0;

    let newTokenCount = originalTokenCount;
    let extraHistory: Content[] = [];

    if (!isSummaryEmpty) {
      extraHistory = [
        {
          role: 'user',
          parts: [{ text: summary }],
        },
        {
          role: 'model',
          parts: [{ text: 'Got it. Thanks for the additional context!' }],
        },
        ...historyToKeep,
      ];

      // Use a shared utility to construct the initial history for an accurate token count.
      const fullNewHistory = await getInitialChatHistory(config, extraHistory);

      // Estimate the token count: 1 token ≈ 4 characters.
      newTokenCount = Math.floor(
        fullNewHistory.reduce(
          (total, content) => total + JSON.stringify(content).length,
          0,
        ) / 4,
      );
    }

    logChatCompression(
      config,
      makeChatCompressionEvent({
        tokens_before: originalTokenCount,
        tokens_after: newTokenCount,
      }),
    );

    if (isSummaryEmpty) {
      return {
        newHistory: null,
        info: {
          originalTokenCount,
          newTokenCount: originalTokenCount,
          compressionStatus: CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
        },
      };
    } else if (newTokenCount > originalTokenCount) {
      return {
        newHistory: null,
        info: {
          originalTokenCount,
          newTokenCount,
          compressionStatus:
            CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
        },
      };
    } else {
      uiTelemetryService.setLastPromptTokenCount(newTokenCount);
      return {
        newHistory: extraHistory,
        info: {
          originalTokenCount,
          newTokenCount,
          compressionStatus: CompressionStatus.COMPRESSED,
        },
      };
    }
  }
}
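To make the split-point behavior concrete, here is a self-contained re-declaration of `findCompressSplitPoint` (trimmed to local types so it runs on its own), applied to the four-message history used in the tests with the service's effective fraction `1 - COMPRESSION_PRESERVE_THRESHOLD = 0.7`. The trimmed `Part`/`Content` interfaces are illustrative stand-ins for the `@google/genai` types.

```typescript
// Self-contained copy of the split-point logic above, trimmed to
// plain local types so it runs standalone. The Part/Content shapes
// are illustrative stand-ins for the @google/genai types.
interface Part {
  text?: string;
  functionCall?: object;
  functionResponse?: object;
}
interface Content {
  role: string;
  parts?: Part[];
}

function findCompressSplitPoint(contents: Content[], fraction: number): number {
  if (fraction <= 0 || fraction >= 1) {
    throw new Error('Fraction must be between 0 and 1');
  }
  const charCounts = contents.map((c) => JSON.stringify(c).length);
  const targetCharCount = charCounts.reduce((a, b) => a + b, 0) * fraction;

  let lastSplitPoint = 0; // 0 is always valid (compress nothing)
  let cumulativeCharCount = 0;
  for (let i = 0; i < contents.length; i++) {
    const content = contents[i];
    // Only a plain user turn (no functionResponse) is a valid split point.
    if (
      content.role === 'user' &&
      !content.parts?.some((part) => !!part.functionResponse)
    ) {
      if (cumulativeCharCount >= targetCharCount) {
        return i;
      }
      lastSplitPoint = i;
    }
    cumulativeCharCount += charCounts[i];
  }

  // No split point past the target: compress everything only if the
  // final model turn carries no pending functionCall.
  const last = contents[contents.length - 1];
  if (last?.role === 'model' && !last?.parts?.some((p) => p.functionCall)) {
    return contents.length;
  }
  return lastSplitPoint;
}

const history: Content[] = [
  { role: 'user', parts: [{ text: 'msg1' }] },
  { role: 'model', parts: [{ text: 'msg2' }] },
  { role: 'user', parts: [{ text: 'msg3' }] },
  { role: 'model', parts: [{ text: 'msg4' }] },
];
```

With this history, no user turn lies past 70% of the cumulative character count, and the final model turn carries no `functionCall`, so the whole history is compressible and the function returns `contents.length` (4). A trailing `functionCall` instead forces a fall back to the last valid user-turn split.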
@@ -62,9 +62,10 @@ export type {
   SubAgentToolResultEvent,
   SubAgentFinishEvent,
   SubAgentErrorEvent,
+  SubAgentApprovalRequestEvent,
 } from './subagent-events.js';

-export { SubAgentEventEmitter } from './subagent-events.js';
+export { SubAgentEventEmitter, SubAgentEventType } from './subagent-events.js';

 // Statistics and formatting
 export type {