Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| pomelo-nwu | 65112f7ebd | feat: fix sessionId | 2025-10-30 16:47:51 +08:00 |
75 changed files with 2020 additions and 5244 deletions

View File

@@ -309,8 +309,7 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
```
- **`tavilyApiKey`** (string):
- **Description:** API key for Tavily web search service. Used to enable the `web_search` tool functionality.
- **Note:** This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
- **Description:** API key for Tavily web search service. Required to enable the `web_search` tool functionality. If not configured, the web search tool will be disabled and skipped.
- **Default:** `undefined` (web search disabled)
- **Example:** `"tavilyApiKey": "tvly-your-api-key-here"`
- **`chatCompression`** (object):
@@ -466,8 +465,8 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
- This is useful for development and testing.
- **`TAVILY_API_KEY`**:
- Your API key for the Tavily web search service.
- Used to enable the `web_search` tool functionality.
- **Note:** For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers to enable web search.
- Required to enable the `web_search` tool functionality.
- If not configured, the web search tool will be disabled and skipped.
- Example: `export TAVILY_API_KEY="tvly-your-api-key-here"`
## Command-Line Arguments
@@ -541,9 +540,6 @@ Arguments passed directly when running the CLI can override other configurations
- Displays the version of the CLI.
- **`--openai-logging`**:
- Enables logging of OpenAI API calls for debugging and analysis. This flag overrides the `enableOpenAILogging` setting in `settings.json`.
- **`--openai-logging-dir <directory>`**:
- Sets a custom directory path for OpenAI API logs. This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion.
- **Example:** `qwen --openai-logging-dir "~/qwen-logs" --openai-logging`
- **`--tavily-api-key <api_key>`**:
- Sets the Tavily API key for web search functionality for this session.
- Example: `qwen --tavily-api-key tvly-your-api-key-here`
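  Session-scoped flags like these can be combined in a single invocation; a minimal sketch (key value is illustrative):
  ```bash
  qwen --openai-logging --tavily-api-key tvly-your-api-key-here
  ```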

View File

@@ -171,18 +171,6 @@ Settings are organized into categories. All settings should be placed within the
- **Description:** Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions.
- **Default:** `false`
- **`model.enableOpenAILogging`** (boolean):
- **Description:** Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files.
- **Default:** `false`
- **`model.openAILoggingDir`** (string):
- **Description:** Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from current working directory), and `~` expansion (home directory).
- **Default:** `undefined`
- **Examples:**
- `"~/qwen-logs"` - Logs to `~/qwen-logs` directory
- `"./custom-logs"` - Logs to `./custom-logs` relative to current directory
- `"/tmp/openai-logs"` - Logs to absolute path `/tmp/openai-logs`
#### `context`
- **`context.fileName`** (string or array of strings):
@@ -258,14 +246,6 @@ Settings are organized into categories. All settings should be placed within the
- It must return function output as JSON on `stdout`, analogous to [`functionResponse.response.content`](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#functionresponse).
- **Default:** `undefined`
- **`tools.useRipgrep`** (boolean):
- **Description:** Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance.
- **Default:** `true`
- **`tools.useBuiltinRipgrep`** (boolean):
- **Description:** Use the bundled ripgrep binary. When set to `false`, the system-level `rg` command will be used instead. This setting is only effective when `tools.useRipgrep` is `true`.
- **Default:** `true`
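  Both options live under the `tools` section of `settings.json`; for example, to keep ripgrep enabled but use the system-level `rg` binary, a minimal sketch:
  ```json
  {
    "tools": {
      "useRipgrep": true,
      "useBuiltinRipgrep": false
    }
  }
  ```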
#### `mcp`
- **`mcp.serverCommand`** (string):
@@ -317,8 +297,7 @@ Settings are organized into categories. All settings should be placed within the
- **Default:** `undefined`
- **`advanced.tavilyApiKey`** (string):
- **Description:** API key for Tavily web search service. Used to enable the `web_search` tool functionality.
- **Note:** This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
- **Description:** API key for Tavily web search service. Required to enable the `web_search` tool functionality. If not configured, the web search tool will be disabled and skipped.
- **Default:** `undefined`
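  In `settings.json` this key nests under `advanced`; a minimal sketch (key value is illustrative):
  ```json
  {
    "advanced": {
      "tavilyApiKey": "tvly-your-api-key-here"
    }
  }
  ```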
#### `mcpServers`
@@ -399,8 +378,6 @@ Here is an example of a `settings.json` file with the nested structure, new as o
"model": {
"name": "qwen3-coder-plus",
"maxSessionTurns": 10,
"enableOpenAILogging": false,
"openAILoggingDir": "~/qwen-logs",
"summarizeToolOutput": {
"run_shell_command": {
"tokenBudget": 100
@@ -489,8 +466,8 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
- Set to a string to customize the title of the CLI.
- **`TAVILY_API_KEY`**:
- Your API key for the Tavily web search service.
- Used to enable the `web_search` tool functionality.
- **Note:** For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers to enable web search.
- Required to enable the `web_search` tool functionality.
- If not configured, the web search tool will be disabled and skipped.
- Example: `export TAVILY_API_KEY="tvly-your-api-key-here"`
## Command-Line Arguments
@@ -571,9 +548,6 @@ Arguments passed directly when running the CLI can override other configurations
- Displays the version of the CLI.
- **`--openai-logging`**:
- Enables logging of OpenAI API calls for debugging and analysis. This flag overrides the `enableOpenAILogging` setting in `settings.json`.
- **`--openai-logging-dir <directory>`**:
- Sets a custom directory path for OpenAI API logs. This flag overrides the `openAILoggingDir` setting in `settings.json`. Supports absolute paths, relative paths, and `~` expansion.
- **Example:** `qwen --openai-logging-dir "~/qwen-logs" --openai-logging`
- **`--tavily-api-key <api_key>`**:
- Sets the Tavily API key for web search functionality for this session.
- Example: `qwen --tavily-api-key tvly-your-api-key-here`

View File

@@ -68,66 +68,72 @@ Qwen Code provides a comprehensive suite of tools for interacting with the local
- **File:** `glob.ts`
- **Parameters:**
- `pattern` (string, required): The glob pattern to match against (e.g., `"*.py"`, `"src/**/*.js"`).
- `path` (string, optional): The directory to search in. If not specified, the current working directory will be used.
- `path` (string, optional): The absolute path to the directory to search within. If omitted, searches the tool's root directory.
- `case_sensitive` (boolean, optional): Whether the search should be case-sensitive. Defaults to `false`.
- `respect_git_ignore` (boolean, optional): Whether to respect .gitignore patterns when finding files. Defaults to `true`.
- **Behavior:**
- Searches for files matching the glob pattern within the specified directory.
- Returns a list of absolute paths, sorted with the most recently modified files first.
- Respects .gitignore and .qwenignore patterns by default.
- Limits results to 100 files to prevent context overflow.
- **Output (`llmContent`):** A message like: `Found 5 file(s) matching "*.ts" within /path/to/search/dir, sorted by modification time (newest first):\n---\n/path/to/file1.ts\n/path/to/subdir/file2.ts\n---\n[95 files truncated] ...`
- Ignores common nuisance directories like `node_modules` and `.git` by default.
- **Output (`llmContent`):** A message like: `Found 5 file(s) matching "*.ts" within src, sorted by modification time (newest first):\nsrc/file1.ts\nsrc/subdir/file2.ts...`
- **Confirmation:** No.
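### `glob` examples
A usage example in the same style as the other tool examples in this document (the path is illustrative):
```
glob(pattern="src/**/*.ts", path="/path/to/project")
```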
## 5. `grep_search` (Grep)
## 5. `search_file_content` (SearchText)
`grep_search` searches for a regular expression pattern within the content of files in a specified directory. Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.
`search_file_content` searches for a regular expression pattern within the content of files in a specified directory. Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.
- **Tool name:** `grep_search`
- **Display name:** Grep
- **File:** `ripGrep.ts` (with `grep.ts` as fallback)
- **Tool name:** `search_file_content`
- **Display name:** SearchText
- **File:** `grep.ts`
- **Parameters:**
- `pattern` (string, required): The regular expression pattern to search for in file contents (e.g., `"function\\s+myFunction"`, `"log.*Error"`).
- `path` (string, optional): File or directory to search in. Defaults to current working directory.
- `glob` (string, optional): Glob pattern to filter files (e.g. `"*.js"`, `"src/**/*.{ts,tsx}"`).
- `limit` (number, optional): Limit output to first N matching lines. Optional - shows all matches if not specified.
- `pattern` (string, required): The regular expression (regex) to search for (e.g., `"function\s+myFunction"`).
- `path` (string, optional): The absolute path to the directory to search within. Defaults to the current working directory.
- `include` (string, optional): A glob pattern to filter which files are searched (e.g., `"*.js"`, `"src/**/*.{ts,tsx}"`). If omitted, searches most files (respecting common ignores).
- `maxResults` (number, optional): Maximum number of matches to return to prevent context overflow (default: 20, max: 100). Use lower values for broad searches, higher for specific searches.
- **Behavior:**
- Uses ripgrep for fast search when available; otherwise falls back to a JavaScript-based search implementation.
- Returns matching lines with file paths and line numbers.
- Case-insensitive by default.
- Respects .gitignore and .qwenignore patterns.
- Limits output to prevent context overflow.
- Uses `git grep` if available in a Git repository for speed; otherwise, falls back to system `grep` or a JavaScript-based search.
- Returns a list of matching lines, each prefixed with its file path (relative to the search directory) and line number.
- Limits results to a maximum of 20 matches by default to prevent context overflow. When results are truncated, shows a clear warning with guidance on refining searches.
- **Output (`llmContent`):** A formatted string of matches, e.g.:
```
Found 3 matches for pattern "myFunction" in path "." (filter: "*.ts"):
---
src/utils.ts:15:export function myFunction() {
src/utils.ts:22: myFunction.call();
src/index.ts:5:import { myFunction } from './utils';
File: src/utils.ts
L15: export function myFunction() {
L22: myFunction.call();
---
File: src/index.ts
L5: import { myFunction } from './utils';
---
[0 lines truncated] ...
WARNING: Results truncated to prevent context overflow. To see more results:
- Use a more specific pattern to reduce matches
- Add file filters with the 'include' parameter (e.g., "*.js", "src/**")
- Specify a narrower 'path' to search in a subdirectory
- Increase 'maxResults' parameter if you need more matches (current: 20)
```
- **Confirmation:** No.
### `grep_search` examples
### `search_file_content` examples
Search for a pattern with default result limiting:
```
grep_search(pattern="function\\s+myFunction", path="src")
search_file_content(pattern="function\s+myFunction", path="src")
```
Search for a pattern with custom result limiting:
```
grep_search(pattern="function", path="src", limit=50)
search_file_content(pattern="function", path="src", maxResults=50)
```
Search for a pattern with file filtering and custom result limiting:
```
grep_search(pattern="function", glob="*.js", limit=10)
search_file_content(pattern="function", include="*.js", maxResults=10)
```
## 6. `edit` (Edit)

View File

@@ -1,186 +1,43 @@
# Web Search Tool (`web_search`)
This document describes the `web_search` tool for performing web searches using multiple providers.
This document describes the `web_search` tool.
## Description
Use `web_search` to perform a web search and get information from the internet. The tool supports multiple search providers and returns a concise answer with source citations when available.
### Supported Providers
1. **DashScope** (Official, Free) - Automatically available for Qwen OAuth users (200 requests/minute, 2000 requests/day)
2. **Tavily** - High-quality search API with built-in answer generation
3. **Google Custom Search** - Google's Custom Search JSON API
Use `web_search` to perform a web search using the Tavily API. The tool returns a concise answer with sources when possible.
### Arguments
`web_search` takes two arguments:
`web_search` takes one argument:
- `query` (string, required): The search query
- `provider` (string, optional): Specific provider to use ("dashscope", "tavily", "google")
- If not specified, uses the default provider from configuration
- `query` (string, required): The search query.
## Configuration
## How to use `web_search`
### Method 1: Settings File (Recommended)
`web_search` calls the Tavily API directly. You must configure the `TAVILY_API_KEY` through one of the following methods:
Add to your `settings.json`:
1. **Settings file**: Add `"tavilyApiKey": "your-key-here"` to your `settings.json`
2. **Environment variable**: Set `TAVILY_API_KEY` in your environment or `.env` file
3. **Command line**: Use `--tavily-api-key your-key-here` when running the CLI
```json
{
"webSearch": {
"provider": [
{ "type": "dashscope" },
{ "type": "tavily", "apiKey": "tvly-xxxxx" },
{
"type": "google",
"apiKey": "your-google-api-key",
"searchEngineId": "your-search-engine-id"
}
],
"default": "dashscope"
}
}
```
If the key is not configured, the tool will be disabled and skipped.
**Notes:**
- DashScope doesn't require an API key (official, free service)
- **Qwen OAuth users:** DashScope is automatically added to your provider list, even if not explicitly configured
- Configure additional providers (Tavily, Google) if you want to use them alongside DashScope
- Set `default` to specify which provider to use by default (if not set, priority order: Tavily > Google > DashScope)
### Method 2: Environment Variables
Set environment variables in your shell or `.env` file:
```bash
# Tavily
export TAVILY_API_KEY="tvly-xxxxx"
# Google
export GOOGLE_API_KEY="your-api-key"
export GOOGLE_SEARCH_ENGINE_ID="your-engine-id"
```
### Method 3: Command Line Arguments
Pass API keys when running Qwen Code:
```bash
# Tavily
qwen --tavily-api-key tvly-xxxxx
# Google
qwen --google-api-key your-key --google-search-engine-id your-id
# Specify default provider
qwen --web-search-default tavily
```
### Backward Compatibility (Deprecated)
⚠️ **DEPRECATED:** The legacy `tavilyApiKey` configuration is still supported for backward compatibility but is deprecated:
```json
{
"advanced": {
"tavilyApiKey": "tvly-xxxxx" // ⚠️ Deprecated
}
}
```
**Important:** This configuration is deprecated and will be removed in a future version. Please migrate to the new `webSearch` configuration format shown above. The old configuration will automatically configure Tavily as a provider, but we strongly recommend updating your configuration.
## Disabling Web Search
If you want to disable the web search functionality, you can exclude the `web_search` tool in your `settings.json`:
```json
{
"tools": {
"exclude": ["web_search"]
}
}
```
**Note:** This setting requires a restart of Qwen Code to take effect. Once disabled, the `web_search` tool will not be available to the model, even if web search providers are configured.
## Usage Examples
### Basic search (using default provider)
Usage:
```
web_search(query="latest advancements in AI")
web_search(query="Your query goes here.")
```
### Search with specific provider
## `web_search` examples
Get information on a topic:
```
web_search(query="latest advancements in AI", provider="tavily")
web_search(query="latest advancements in AI-powered code generation")
```
### Real-world examples
## Important notes
```
web_search(query="weather in San Francisco today")
web_search(query="latest Node.js LTS version", provider="google")
web_search(query="best practices for React 19", provider="dashscope")
```
## Provider Details
### DashScope (Official)
- **Cost:** Free
- **Authentication:** Automatically available when using Qwen OAuth authentication
- **Configuration:** No API key required, automatically added to provider list for Qwen OAuth users
- **Quota:** 200 requests/minute, 2000 requests/day
- **Best for:** General queries, always available as fallback for Qwen OAuth users
- **Auto-registration:** If you're using Qwen OAuth, DashScope is automatically added to your provider list even if you don't configure it explicitly
### Tavily
- **Cost:** Requires API key (paid service with free tier)
- **Sign up:** https://tavily.com
- **Features:** High-quality results with AI-generated answers
- **Best for:** Research, comprehensive answers with citations
### Google Custom Search
- **Cost:** Free tier available (100 queries/day)
- **Setup:**
1. Enable Custom Search API in Google Cloud Console
2. Create a Custom Search Engine at https://programmablesearchengine.google.com
- **Features:** Google's search quality
- **Best for:** Specific, factual queries
## Important Notes
- **Response format:** Returns a concise answer with numbered source citations
- **Citations:** Source links are appended as a numbered list: [1], [2], etc.
- **Multiple providers:** If one provider fails, manually specify another using the `provider` parameter
- **DashScope availability:** Automatically available for Qwen OAuth users, no configuration needed
- **Default provider selection:** The system automatically selects a default provider based on availability:
1. Your explicit `default` configuration (highest priority)
2. CLI argument `--web-search-default`
3. First available provider by priority: Tavily > Google > DashScope
## Troubleshooting
**Tool not available?**
- **For Qwen OAuth users:** The tool is automatically registered with DashScope provider, no configuration needed
- **For other authentication types:** Ensure at least one provider (Tavily or Google) is configured
- For Tavily/Google: Verify your API keys are correct
**Provider-specific errors?**
- Use the `provider` parameter to try a different search provider
- Check your API quotas and rate limits
- Verify API keys are properly set in configuration
**Need help?**
- Check your configuration: Run `qwen` and use the settings dialog
- View your current settings in `~/.qwen-code/settings.json` (macOS/Linux) or `%USERPROFILE%\.qwen-code\settings.json` (Windows)
- **Response returned:** The `web_search` tool returns a concise answer when available, with a list of source links.
- **Citations:** Source links are appended as a numbered list.
- **API key:** Configure `TAVILY_API_KEY` via settings.json, environment variables, .env files, or command line arguments. If not configured, the tool is not registered.
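For reference, the most common ways to supply the key look like this (the key value is illustrative):
```bash
# Environment variable, in your shell profile or a .env file
export TAVILY_API_KEY="tvly-your-api-key-here"

# Or per session on the command line
qwen --tavily-api-key tvly-your-api-key-here
```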

View File

@@ -9,53 +9,14 @@ import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
describe('web_search', () => {
it('should be able to search the web', async () => {
// Check if any web search provider is available
const hasTavilyKey = !!process.env['TAVILY_API_KEY'];
const hasGoogleKey =
!!process.env['GOOGLE_API_KEY'] &&
!!process.env['GOOGLE_SEARCH_ENGINE_ID'];
// Skip if no provider is configured
// Note: DashScope provider is automatically available for Qwen OAuth users,
// but we can't easily detect that in tests without actual OAuth credentials
if (!hasTavilyKey && !hasGoogleKey) {
console.warn(
'Skipping web search test: No web search provider configured. ' +
'Set TAVILY_API_KEY or GOOGLE_API_KEY+GOOGLE_SEARCH_ENGINE_ID environment variables.',
);
// Skip if Tavily key is not configured
if (!process.env['TAVILY_API_KEY']) {
console.warn('Skipping web search test: TAVILY_API_KEY not set');
return;
}
const rig = new TestRig();
// Configure web search in settings if provider keys are available
const webSearchSettings: Record<string, unknown> = {};
const providers: Array<{
type: string;
apiKey?: string;
searchEngineId?: string;
}> = [];
if (hasTavilyKey) {
providers.push({ type: 'tavily', apiKey: process.env['TAVILY_API_KEY'] });
}
if (hasGoogleKey) {
providers.push({
type: 'google',
apiKey: process.env['GOOGLE_API_KEY'],
searchEngineId: process.env['GOOGLE_SEARCH_ENGINE_ID'],
});
}
if (providers.length > 0) {
webSearchSettings.webSearch = {
provider: providers,
default: providers[0]?.type,
};
}
await rig.setup('should be able to search the web', {
settings: webSearchSettings,
});
await rig.setup('should be able to search the web');
let result;
try {

package-lock.json (generated)
View File

@@ -1,12 +1,12 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.1.5",
"version": "0.1.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@qwen-code/qwen-code",
"version": "0.1.5",
"version": "0.1.2",
"workspaces": [
"packages/*"
],
@@ -555,7 +555,6 @@
}
],
"license": "MIT",
"peer": true,
"engines": {
"node": ">=18"
},
@@ -579,7 +578,6 @@
}
],
"license": "MIT",
"peer": true,
"engines": {
"node": ">=18"
}
@@ -2120,7 +2118,6 @@
"resolved": "https://registry.npmjs.org/@opentelemetry/api/-/api-1.9.0.tgz",
"integrity": "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==",
"license": "Apache-2.0",
"peer": true,
"engines": {
"node": ">=8.0.0"
}
@@ -3282,7 +3279,6 @@
"resolved": "https://registry.npmjs.org/@testing-library/dom/-/dom-10.4.1.tgz",
"integrity": "sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg==",
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.10.4",
"@babel/runtime": "^7.12.5",
@@ -3721,7 +3717,6 @@
"integrity": "sha512-AwAfQ2Wa5bCx9WP8nZL2uMZWod7J7/JSplxbTmBQ5ms6QpqNYm672H0Vu9ZVKVngQ+ii4R/byguVEUZQyeg44g==",
"devOptional": true,
"license": "MIT",
"peer": true,
"dependencies": {
"csstype": "^3.0.2"
}
@@ -3732,7 +3727,6 @@
"integrity": "sha512-4hOiT/dwO8Ko0gV1m/TJZYk3y0KBnY9vzDh7W+DH17b2HFSOGgdj33dhihPeuy3l0q23+4e+hoXHV6hCC4dCXw==",
"dev": true,
"license": "MIT",
"peer": true,
"peerDependencies": {
"@types/react": "^19.0.0"
}
@@ -3938,7 +3932,6 @@
"integrity": "sha512-6sMvZePQrnZH2/cJkwRpkT7DxoAWh+g6+GFRK6bV3YQo7ogi3SX5rgF6099r5Q53Ma5qeT7LGmOmuIutF4t3lA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@typescript-eslint/scope-manager": "8.35.0",
"@typescript-eslint/types": "8.35.0",
@@ -4707,7 +4700,6 @@
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -5062,7 +5054,8 @@
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==",
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/array-includes": {
"version": "3.1.9",
@@ -6227,6 +6220,7 @@
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz",
"integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"safe-buffer": "5.2.1"
},
@@ -7260,7 +7254,6 @@
"integrity": "sha512-GsGizj2Y1rCWDu6XoEekL3RLilp0voSePurjZIkxL3wlm5o5EC9VpgaP7lrCvjnkuLvzFBQWB3vWB3K5KQTveQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.2.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -7730,6 +7723,7 @@
"resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz",
"integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==",
"license": "MIT",
"peer": true,
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
@@ -7791,6 +7785,7 @@
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 0.6"
}
@@ -7800,6 +7795,7 @@
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"peer": true,
"dependencies": {
"ms": "2.0.0"
}
@@ -7809,6 +7805,7 @@
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
"integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 0.8"
}
@@ -7975,6 +7972,7 @@
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz",
"integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"debug": "2.6.9",
"encodeurl": "~2.0.0",
@@ -7993,6 +7991,7 @@
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"peer": true,
"dependencies": {
"ms": "2.0.0"
}
@@ -8001,13 +8000,15 @@
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/finalhandler/node_modules/statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
"integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 0.8"
}
@@ -9046,7 +9047,6 @@
"resolved": "https://registry.npmjs.org/ink/-/ink-6.2.3.tgz",
"integrity": "sha512-fQkfEJjKbLXIcVWEE3MvpYSnwtbbmRsmeNDNz1pIuOFlwE+UF2gsy228J36OXKZGWJWZJKUigphBSqCNMcARtg==",
"license": "MIT",
"peer": true,
"dependencies": {
"@alcalzone/ansi-tokenize": "^0.2.0",
"ansi-escapes": "^7.0.0",
@@ -10950,6 +10950,7 @@
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 0.6"
}
@@ -12157,7 +12158,8 @@
"version": "0.1.12",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==",
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/path-type": {
"version": "3.0.0",
@@ -12661,7 +12663,6 @@
"resolved": "https://registry.npmjs.org/react/-/react-19.1.0.tgz",
"integrity": "sha512-FS+XFBNvn3GTAWq26joslQgWNoFu08F4kl0J4CgdNKADkdSGXQyTCnKteIAJy96Br6YbpEU1LSzV5dYtjMkMDg==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
@@ -12672,7 +12673,6 @@
"integrity": "sha512-cq/o30z9W2Wb4rzBefjv5fBalHU0rJGZCHAkf/RHSBWSSYwh8PlQTqqOJmgIIbBtpj27T6FIPXeomIjZtCNVqA==",
"devOptional": true,
"license": "MIT",
"peer": true,
"dependencies": {
"shell-quote": "^1.6.1",
"ws": "^7"
@@ -12706,7 +12706,6 @@
"integrity": "sha512-Xs1hdnE+DyKgeHJeJznQmYMIBG3TKIHJJT95Q58nHLSrElKlGQqDTR2HQ9fx5CN/Gk6Vh/kupBTDLU11/nDk/g==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"scheduler": "^0.26.0"
},
@@ -14516,7 +14515,6 @@
"integrity": "sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -14690,8 +14688,7 @@
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"dev": true,
"license": "0BSD",
"peer": true
"license": "0BSD"
},
"node_modules/tsx": {
"version": "4.20.3",
@@ -14699,7 +14696,6 @@
"integrity": "sha512-qjbnuR9Tr+FJOMBqJCW5ehvIo/buZq7vH7qD7JziU98h6l3qGy0a/yPFjwO+y0/T7GFpNgNAvEcPPVfyT8rrPQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "~0.25.0",
"get-tsconfig": "^4.7.5"
@@ -14884,7 +14880,6 @@
"integrity": "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==",
"dev": true,
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
@@ -15154,6 +15149,7 @@
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 0.4.0"
}
@@ -15209,7 +15205,6 @@
"integrity": "sha512-ixXJB1YRgDIw2OszKQS9WxGHKwLdCsbQNkpJN171udl6szi/rIySHL6/Os3s2+oE4P/FLD4dxg4mD7Wust+u5g==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "^0.25.0",
"fdir": "^6.4.6",
@@ -15323,7 +15318,6 @@
"integrity": "sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -15337,7 +15331,6 @@
"integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@types/chai": "^5.2.2",
"@vitest/expect": "3.2.4",
@@ -16016,7 +16009,6 @@
"resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz",
"integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
"license": "MIT",
"peer": true,
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}
@@ -16032,7 +16024,7 @@
},
"packages/cli": {
"name": "@qwen-code/qwen-code",
"version": "0.1.5",
"version": "0.1.2",
"dependencies": {
"@google/genai": "1.16.0",
"@iarna/toml": "^2.2.5",
@@ -16147,7 +16139,7 @@
},
"packages/core": {
"name": "@qwen-code/qwen-code-core",
"version": "0.1.5",
"version": "0.1.2",
"hasInstallScript": true,
"dependencies": {
"@google/genai": "1.16.0",
@@ -16277,7 +16269,6 @@
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -16287,7 +16278,7 @@
},
"packages/test-utils": {
"name": "@qwen-code/qwen-code-test-utils",
"version": "0.1.5",
"version": "0.1.2",
"dev": true,
"license": "Apache-2.0",
"devDependencies": {
@@ -16299,7 +16290,7 @@
},
"packages/vscode-ide-companion": {
"name": "qwen-code-vscode-ide-companion",
"version": "0.1.5",
"version": "0.1.2",
"license": "LICENSE",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.15.1",

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.1.5",
"version": "0.1.2",
"engines": {
"node": ">=20.0.0"
},
@@ -13,7 +13,7 @@
"url": "git+https://github.com/QwenLM/qwen-code.git"
},
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.5"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.2"
},
"scripts": {
"start": "cross-env node scripts/start.js",

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.1.5",
"version": "0.1.2",
"description": "Qwen Code",
"repository": {
"type": "git",
@@ -25,7 +25,7 @@
"dist"
],
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.5"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.1.2"
},
"dependencies": {
"@google/genai": "1.16.0",

View File

@@ -2399,73 +2399,6 @@ describe('loadCliConfig useRipgrep', () => {
});
});
describe('loadCliConfig useBuiltinRipgrep', () => {
const originalArgv = process.argv;
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
vi.stubEnv('GEMINI_API_KEY', 'test-api-key');
});
afterEach(() => {
process.argv = originalArgv;
vi.unstubAllEnvs();
vi.restoreAllMocks();
});
it('should be true by default when useBuiltinRipgrep is not set in settings', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
const settings: Settings = {};
const config = await loadCliConfig(
settings,
[],
new ExtensionEnablementManager(
ExtensionStorage.getUserExtensionsDir(),
argv.extensions,
),
'test-session',
argv,
);
expect(config.getUseBuiltinRipgrep()).toBe(true);
});
it('should be false when useBuiltinRipgrep is set to false in settings', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
const settings: Settings = { tools: { useBuiltinRipgrep: false } };
const config = await loadCliConfig(
settings,
[],
new ExtensionEnablementManager(
ExtensionStorage.getUserExtensionsDir(),
argv.extensions,
),
'test-session',
argv,
);
expect(config.getUseBuiltinRipgrep()).toBe(false);
});
it('should be true when useBuiltinRipgrep is explicitly set to true in settings', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
const settings: Settings = { tools: { useBuiltinRipgrep: true } };
const config = await loadCliConfig(
settings,
[],
new ExtensionEnablementManager(
ExtensionStorage.getUserExtensionsDir(),
argv.extensions,
),
'test-session',
argv,
);
expect(config.getUseBuiltinRipgrep()).toBe(true);
});
});
describe('screenReader configuration', () => {
const originalArgv = process.argv;

View File

@@ -42,7 +42,6 @@ import { mcpCommand } from '../commands/mcp.js';
import { isWorkspaceTrusted } from './trustedFolders.js';
import type { ExtensionEnablementManager } from './extensions/extensionEnablement.js';
import { buildWebSearchConfig } from './webSearch.js';
// Simple console logger for now - replace with actual logger if available
const logger = {
@@ -114,13 +113,9 @@ export interface CliArgs {
openaiLogging: boolean | undefined;
openaiApiKey: string | undefined;
openaiBaseUrl: string | undefined;
openaiLoggingDir: string | undefined;
proxy: string | undefined;
includeDirectories: string[] | undefined;
tavilyApiKey: string | undefined;
googleApiKey: string | undefined;
googleSearchEngineId: string | undefined;
webSearchDefault: string | undefined;
screenReader: boolean | undefined;
vlmSwitchMode: string | undefined;
useSmartEdit: boolean | undefined;
@@ -198,13 +193,14 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
})
.option('proxy', {
type: 'string',
description: 'Proxy for Qwen Code, like schema://user:password@host:port',
description:
'Proxy for Qwen Code, like schema://user:password@host:port',
})
.deprecateOption(
'proxy',
'Use the "proxy" setting in settings.json instead. This flag will be removed in a future version.',
)
.command('$0 [query..]', 'Launch Qwen Code CLI', (yargsInstance: Argv) =>
.command('$0 [query..]', 'Launch Gemini CLI', (yargsInstance: Argv) =>
yargsInstance
.positional('query', {
description:
@@ -318,11 +314,6 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
description:
'Enable logging of OpenAI API calls for debugging and analysis',
})
.option('openai-logging-dir', {
type: 'string',
description:
'Custom directory path for OpenAI API logs. Overrides settings files.',
})
.option('openai-api-key', {
type: 'string',
description: 'OpenAI API key to use for authentication',
@@ -333,20 +324,7 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
})
.option('tavily-api-key', {
type: 'string',
description: 'Tavily API key for web search',
})
.option('google-api-key', {
type: 'string',
description: 'Google Custom Search API key',
})
.option('google-search-engine-id', {
type: 'string',
description: 'Google Custom Search Engine ID',
})
.option('web-search-default', {
type: 'string',
description:
'Default web search provider (dashscope, tavily, google)',
description: 'Tavily API key for web search functionality',
})
.option('screen-reader', {
type: 'boolean',
@@ -770,15 +748,12 @@ export async function loadCliConfig(
(typeof argv.openaiLogging === 'undefined'
? settings.model?.enableOpenAILogging
: argv.openaiLogging) ?? false,
openAILoggingDir:
argv.openaiLoggingDir || settings.model?.openAILoggingDir,
},
cliVersion: await getCliVersion(),
webSearch: buildWebSearchConfig(
argv,
settings,
settings.security?.auth?.selectedType,
),
tavilyApiKey:
argv.tavilyApiKey ||
settings.advanced?.tavilyApiKey ||
process.env['TAVILY_API_KEY'],
summarizeToolOutput: settings.model?.summarizeToolOutput,
ideMode,
chatCompression: settings.model?.chatCompression,
@@ -786,7 +761,6 @@ export async function loadCliConfig(
interactive,
trustedFolder,
useRipgrep: settings.tools?.useRipgrep,
useBuiltinRipgrep: settings.tools?.useBuiltinRipgrep,
shouldUseNodePtyShell: settings.tools?.shell?.enableInteractiveShell,
skipNextSpeakerCheck: settings.model?.skipNextSpeakerCheck,
enablePromptCompletion: settings.general?.enablePromptCompletion ?? false,

View File

@@ -66,8 +66,6 @@ import {
loadEnvironment,
migrateDeprecatedSettings,
SettingScope,
SETTINGS_VERSION,
SETTINGS_VERSION_KEY,
} from './settings.js';
import { FatalConfigError, QWEN_DIR } from '@qwen-code/qwen-code-core';
@@ -96,7 +94,6 @@ vi.mock('fs', async (importOriginal) => {
existsSync: vi.fn(),
readFileSync: vi.fn(),
writeFileSync: vi.fn(),
renameSync: vi.fn(),
mkdirSync: vi.fn(),
realpathSync: (p: string) => p,
};
@@ -174,15 +171,11 @@ describe('Settings Loading and Merging', () => {
getSystemSettingsPath(),
'utf-8',
);
expect(settings.system.settings).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.system.settings).toEqual(systemSettingsContent);
expect(settings.user.settings).toEqual({});
expect(settings.workspace.settings).toEqual({});
expect(settings.merged).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
});
@@ -214,14 +207,10 @@ describe('Settings Loading and Merging', () => {
expectedUserSettingsPath,
'utf-8',
);
expect(settings.user.settings).toEqual({
...userSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.user.settings).toEqual(userSettingsContent);
expect(settings.workspace.settings).toEqual({});
expect(settings.merged).toEqual({
...userSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
});
@@ -252,13 +241,9 @@ describe('Settings Loading and Merging', () => {
'utf-8',
);
expect(settings.user.settings).toEqual({});
expect(settings.workspace.settings).toEqual({
...workspaceSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
expect(settings.merged).toEqual({
...workspaceSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
});
@@ -319,20 +304,10 @@ describe('Settings Loading and Merging', () => {
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.system.settings).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.user.settings).toEqual({
...userSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.workspace.settings).toEqual({
...workspaceSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.system.settings).toEqual(systemSettingsContent);
expect(settings.user.settings).toEqual(userSettingsContent);
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
expect(settings.merged).toEqual({
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
ui: {
theme: 'system-theme',
},
@@ -386,7 +361,6 @@ describe('Settings Loading and Merging', () => {
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged).toEqual({
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
ui: {
theme: 'legacy-dark',
},
@@ -439,132 +413,6 @@ describe('Settings Loading and Merging', () => {
expect((settings.merged as TestSettings)['allowedTools']).toBeUndefined();
});
it('should add version field to migrated settings file', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const legacySettingsContent = {
theme: 'dark',
model: 'qwen-coder',
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(legacySettingsContent);
return '{}';
},
);
loadSettings(MOCK_WORKSPACE_DIR);
// Verify that fs.writeFileSync was called with migrated settings including version
expect(fs.writeFileSync).toHaveBeenCalled();
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
const writtenContent = JSON.parse(writeCall[1] as string);
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
});
it('should not re-migrate settings that have version field', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const migratedSettingsContent = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
ui: {
theme: 'dark',
},
model: {
name: 'qwen-coder',
},
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(migratedSettingsContent);
return '{}';
},
);
loadSettings(MOCK_WORKSPACE_DIR);
// Verify that fs.renameSync and fs.writeFileSync were NOT called
// (because no migration was needed)
expect(fs.renameSync).not.toHaveBeenCalled();
expect(fs.writeFileSync).not.toHaveBeenCalled();
});
it('should add version field to V2 settings without version and write to disk', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
// V2 format but no version field
const v2SettingsWithoutVersion = {
ui: {
theme: 'dark',
},
model: {
name: 'qwen-coder',
},
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(v2SettingsWithoutVersion);
return '{}';
},
);
loadSettings(MOCK_WORKSPACE_DIR);
// Verify that fs.writeFileSync was called (to add version)
// but NOT fs.renameSync (no backup needed, just adding version)
expect(fs.renameSync).not.toHaveBeenCalled();
expect(fs.writeFileSync).toHaveBeenCalledTimes(1);
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
const writtenPath = writeCall[0];
const writtenContent = JSON.parse(writeCall[1] as string);
expect(writtenPath).toBe(USER_SETTINGS_PATH);
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
expect(writtenContent.ui?.theme).toBe('dark');
expect(writtenContent.model?.name).toBe('qwen-coder');
});
it('should correctly handle partially migrated settings without version field', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
// Edge case: model already in V2 format (object), but autoAccept in V1 format
const partiallyMigratedContent = {
model: {
name: 'qwen-coder',
},
autoAccept: false, // V1 key
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(partiallyMigratedContent);
return '{}';
},
);
loadSettings(MOCK_WORKSPACE_DIR);
// Verify that the migrated settings preserve the model object correctly
expect(fs.writeFileSync).toHaveBeenCalled();
const writeCall = (fs.writeFileSync as Mock).mock.calls[0];
const writtenContent = JSON.parse(writeCall[1] as string);
// Model should remain as an object, not double-nested
expect(writtenContent.model).toEqual({ name: 'qwen-coder' });
// autoAccept should be migrated to tools.autoAccept
expect(writtenContent.tools?.autoAccept).toBe(false);
// Version field should be added
expect(writtenContent[SETTINGS_VERSION_KEY]).toBe(SETTINGS_VERSION);
});
it('should correctly merge and migrate legacy array properties from multiple scopes', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const legacyUserSettings = {
@@ -667,24 +515,11 @@ describe('Settings Loading and Merging', () => {
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.systemDefaults.settings).toEqual({
...systemDefaultsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.system.settings).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.user.settings).toEqual({
...userSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.workspace.settings).toEqual({
...workspaceSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.systemDefaults.settings).toEqual(systemDefaultsContent);
expect(settings.system.settings).toEqual(systemSettingsContent);
expect(settings.user.settings).toEqual(userSettingsContent);
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
expect(settings.merged).toEqual({
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
context: {
fileName: 'WORKSPACE_CONTEXT.md',
includeDirectories: [
@@ -1031,14 +866,8 @@ describe('Settings Loading and Merging', () => {
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.user.settings).toEqual({
...userSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.workspace.settings).toEqual({
...workspaceSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.user.settings).toEqual(userSettingsContent);
expect(settings.workspace.settings).toEqual(workspaceSettingsContent);
expect(settings.merged.mcpServers).toEqual({
'user-server': {
command: 'user-command',
@@ -1867,13 +1696,9 @@ describe('Settings Loading and Merging', () => {
'utf-8',
);
expect(settings.system.path).toBe(MOCK_ENV_SYSTEM_SETTINGS_PATH);
expect(settings.system.settings).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
expect(settings.system.settings).toEqual(systemSettingsContent);
expect(settings.merged).toEqual({
...systemSettingsContent,
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
});
});
});
@@ -2423,44 +2248,6 @@ describe('Settings Loading and Merging', () => {
customWittyPhrases: ['test phrase'],
});
});
it('should remove version field when migrating to V1', () => {
const v2Settings = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
ui: {
theme: 'dark',
},
model: {
name: 'qwen-coder',
},
};
const v1Settings = migrateSettingsToV1(v2Settings);
// Version field should not be present in V1 settings
expect(v1Settings[SETTINGS_VERSION_KEY]).toBeUndefined();
// Other fields should be properly migrated
expect(v1Settings).toEqual({
theme: 'dark',
model: 'qwen-coder',
});
});
it('should handle version field in unrecognized properties', () => {
const v2Settings = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
general: {
vimMode: true,
},
someUnrecognizedKey: 'value',
};
const v1Settings = migrateSettingsToV1(v2Settings);
// Version field should be filtered out
expect(v1Settings[SETTINGS_VERSION_KEY]).toBeUndefined();
// Unrecognized keys should be preserved
expect(v1Settings['someUnrecognizedKey']).toBe('value');
expect(v1Settings['vimMode']).toBe(true);
});
});
describe('loadEnvironment', () => {
@@ -2581,73 +2368,6 @@ describe('Settings Loading and Merging', () => {
};
expect(needsMigration(settings)).toBe(false);
});
describe('with version field', () => {
it('should return false when version field indicates current or newer version', () => {
const settingsWithVersion = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
theme: 'dark', // Even though this is a V1 key, version field takes precedence
};
expect(needsMigration(settingsWithVersion)).toBe(false);
});
it('should return false when version field indicates a newer version', () => {
const settingsWithNewerVersion = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION + 1,
theme: 'dark',
};
expect(needsMigration(settingsWithNewerVersion)).toBe(false);
});
it('should return true when version field indicates an older version', () => {
const settingsWithOldVersion = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION - 1,
theme: 'dark',
};
expect(needsMigration(settingsWithOldVersion)).toBe(true);
});
it('should use fallback logic when version field is not a number', () => {
const settingsWithInvalidVersion = {
[SETTINGS_VERSION_KEY]: 'not-a-number',
theme: 'dark',
};
expect(needsMigration(settingsWithInvalidVersion)).toBe(true);
});
it('should use fallback logic when version field is missing', () => {
const settingsWithoutVersion = {
theme: 'dark',
};
expect(needsMigration(settingsWithoutVersion)).toBe(true);
});
});
describe('edge case: partially migrated settings', () => {
it('should return true for partially migrated settings without version field', () => {
// This simulates the dangerous edge case: model already in V2 format,
// but other fields in V1 format
const partiallyMigrated = {
model: {
name: 'qwen-coder',
},
autoAccept: false, // V1 key
};
expect(needsMigration(partiallyMigrated)).toBe(true);
});
it('should return false for partially migrated settings WITH version field', () => {
// With version field, we trust that it's been properly migrated
const partiallyMigratedWithVersion = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
model: {
name: 'qwen-coder',
},
autoAccept: false, // This would look like V1 but version says it's V2
};
expect(needsMigration(partiallyMigratedWithVersion)).toBe(false);
});
});
});
describe('migrateDeprecatedSettings', () => {

View File

@@ -56,10 +56,6 @@ export const DEFAULT_EXCLUDED_ENV_VARS = ['DEBUG', 'DEBUG_MODE'];
const MIGRATE_V2_OVERWRITE = true;
// Settings version to track migration state
export const SETTINGS_VERSION = 2;
export const SETTINGS_VERSION_KEY = '$version';
const MIGRATION_MAP: Record<string, string> = {
accessibility: 'ui.accessibility',
allowedTools: 'tools.allowed',
@@ -220,16 +216,8 @@ function setNestedProperty(
}
export function needsMigration(settings: Record<string, unknown>): boolean {
// Check version field first - if present and matches current version, no migration needed
if (SETTINGS_VERSION_KEY in settings) {
const version = settings[SETTINGS_VERSION_KEY];
if (typeof version === 'number' && version >= SETTINGS_VERSION) {
return false;
}
}
// Fallback to legacy detection: A file needs migration if it contains any
// top-level key that is moved to a nested location in V2.
// A file needs migration if it contains any top-level key that is moved to a
// nested location in V2.
const hasV1Keys = Object.entries(MIGRATION_MAP).some(([v1Key, v2Path]) => {
if (v1Key === v2Path || !(v1Key in settings)) {
return false;
@@ -262,21 +250,6 @@ function migrateSettingsToV2(
for (const [oldKey, newPath] of Object.entries(MIGRATION_MAP)) {
if (flatKeys.has(oldKey)) {
// Safety check: If this key is a V2 container (like 'model') and it's
// already an object, it's likely already in V2 format. Skip migration
// to prevent double-nesting (e.g., model.name.name).
if (
KNOWN_V2_CONTAINERS.has(oldKey) &&
typeof flatSettings[oldKey] === 'object' &&
flatSettings[oldKey] !== null &&
!Array.isArray(flatSettings[oldKey])
) {
// This is already a V2 container, carry it over as-is
v2Settings[oldKey] = flatSettings[oldKey];
flatKeys.delete(oldKey);
continue;
}
setNestedProperty(v2Settings, newPath, flatSettings[oldKey]);
flatKeys.delete(oldKey);
}
@@ -314,9 +287,6 @@ function migrateSettingsToV2(
}
}
// Set version field to indicate this is a V2 settings file
v2Settings[SETTINGS_VERSION_KEY] = SETTINGS_VERSION;
return v2Settings;
}
@@ -366,11 +336,6 @@ export function migrateSettingsToV1(
// Carry over any unrecognized keys
for (const remainingKey of v2Keys) {
// Skip the version field - it's only for V2 format
if (remainingKey === SETTINGS_VERSION_KEY) {
continue;
}
const value = v2Settings[remainingKey];
if (value === undefined) {
continue;
@@ -656,22 +621,6 @@ export function loadSettings(
}
settingsObject = migratedSettings;
}
} else if (!(SETTINGS_VERSION_KEY in settingsObject)) {
// No migration needed, but version field is missing - add it for future optimizations
settingsObject[SETTINGS_VERSION_KEY] = SETTINGS_VERSION;
if (MIGRATE_V2_OVERWRITE) {
try {
fs.writeFileSync(
filePath,
JSON.stringify(settingsObject, null, 2),
'utf-8',
);
} catch (e) {
console.error(
`Error adding version to settings file: ${getErrorMessage(e)}`,
);
}
}
}
return { settings: settingsObject as Settings, rawJson: content };
}

View File

@@ -558,16 +558,6 @@ const SETTINGS_SCHEMA = {
description: 'Enable OpenAI logging.',
showInDialog: true,
},
openAILoggingDir: {
type: 'string',
label: 'OpenAI Logging Directory',
category: 'Model',
requiresRestart: false,
default: undefined as string | undefined,
description:
'Custom directory path for OpenAI API logs. If not specified, defaults to logs/openai in the current working directory.',
showInDialog: true,
},
generationConfig: {
type: 'object',
label: 'Generation Configuration',
@@ -857,16 +847,6 @@ const SETTINGS_SCHEMA = {
'Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance.',
showInDialog: true,
},
useBuiltinRipgrep: {
type: 'boolean',
label: 'Use Builtin Ripgrep',
category: 'Tools',
requiresRestart: false,
default: true,
description:
'Use the bundled ripgrep binary. When set to false, the system-level "rg" command will be used instead. This setting is only effective when useRipgrep is true.',
showInDialog: true,
},
enableToolOutputTruncation: {
type: 'boolean',
label: 'Enable Tool Output Truncation',
@@ -1082,36 +1062,17 @@ const SETTINGS_SCHEMA = {
},
tavilyApiKey: {
type: 'string',
label: 'Tavily API Key (Deprecated)',
label: 'Tavily API Key',
category: 'Advanced',
requiresRestart: false,
default: undefined as string | undefined,
description:
'⚠️ DEPRECATED: Please use webSearch.provider configuration instead. Legacy API key for the Tavily API.',
'The API key for the Tavily API. Required to enable the web_search tool functionality.',
showInDialog: false,
},
},
},
webSearch: {
type: 'object',
label: 'Web Search',
category: 'Advanced',
requiresRestart: true,
default: undefined as
| {
provider: Array<{
type: 'tavily' | 'google' | 'dashscope';
apiKey?: string;
searchEngineId?: string;
}>;
default: string;
}
| undefined,
description: 'Configuration for web search providers.',
showInDialog: false,
},
experimental: {
type: 'object',
label: 'Experimental',

View File

@@ -1,121 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { AuthType } from '@qwen-code/qwen-code-core';
import type { WebSearchProviderConfig } from '@qwen-code/qwen-code-core';
import type { Settings } from './settings.js';
/**
* CLI arguments related to web search configuration
*/
export interface WebSearchCliArgs {
tavilyApiKey?: string;
googleApiKey?: string;
googleSearchEngineId?: string;
webSearchDefault?: string;
}
/**
* Web search configuration structure
*/
export interface WebSearchConfig {
provider: WebSearchProviderConfig[];
default: string;
}
/**
* Build webSearch configuration from multiple sources with priority:
* 1. settings.json (new format) - highest priority
* 2. Command line args + environment variables
* 3. Legacy tavilyApiKey (backward compatibility)
*
* @param argv - Command line arguments
* @param settings - User settings from settings.json
* @param authType - Authentication type (e.g., 'qwen-oauth')
* @returns WebSearch configuration or undefined if no providers available
*/
export function buildWebSearchConfig(
argv: WebSearchCliArgs,
settings: Settings,
authType?: string,
): WebSearchConfig | undefined {
const isQwenOAuth = authType === AuthType.QWEN_OAUTH;
// Step 1: Collect providers from settings or command line/env
let providers: WebSearchProviderConfig[] = [];
let userDefault: string | undefined;
if (settings.webSearch) {
// Use providers from settings.json
providers = [...settings.webSearch.provider];
userDefault = settings.webSearch.default;
} else {
// Build providers from command line args and environment variables
const tavilyKey =
argv.tavilyApiKey ||
settings.advanced?.tavilyApiKey ||
process.env['TAVILY_API_KEY'];
if (tavilyKey) {
providers.push({
type: 'tavily',
apiKey: tavilyKey,
} as WebSearchProviderConfig);
}
const googleKey = argv.googleApiKey || process.env['GOOGLE_API_KEY'];
const googleEngineId =
argv.googleSearchEngineId || process.env['GOOGLE_SEARCH_ENGINE_ID'];
if (googleKey && googleEngineId) {
providers.push({
type: 'google',
apiKey: googleKey,
searchEngineId: googleEngineId,
} as WebSearchProviderConfig);
}
}
// Step 2: Ensure dashscope is available for qwen-oauth users
if (isQwenOAuth) {
const hasDashscope = providers.some((p) => p.type === 'dashscope');
if (!hasDashscope) {
providers.push({ type: 'dashscope' } as WebSearchProviderConfig);
}
}
// Step 3: If no providers available, return undefined
if (providers.length === 0) {
return undefined;
}
// Step 4: Determine default provider
// Priority: user explicit config > CLI arg > first available provider (tavily > google > dashscope)
const providerPriority: Array<'tavily' | 'google' | 'dashscope'> = [
'tavily',
'google',
'dashscope',
];
// Determine default provider based on availability
let defaultProvider = userDefault || argv.webSearchDefault;
if (!defaultProvider) {
// Find first available provider by priority order
for (const providerType of providerPriority) {
if (providers.some((p) => p.type === providerType)) {
defaultProvider = providerType;
break;
}
}
// Fallback to first available provider if none found in priority list
if (!defaultProvider) {
defaultProvider = providers[0]?.type || 'dashscope';
}
}
return {
provider: providers,
default: defaultProvider,
};
}
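
For context, a usage sketch of the helper removed above; the key value is a placeholder and the settings object is trimmed to the fields the function actually reads:

```ts
// No `webSearch` block in settings, a legacy Tavily key on the CLI, and qwen-oauth auth:
const result = buildWebSearchConfig(
  { tavilyApiKey: 'tvly-placeholder' },
  { advanced: {} } as Settings,
  AuthType.QWEN_OAUTH,
);
// result ≈ {
//   provider: [{ type: 'tavily', apiKey: 'tvly-placeholder' }, { type: 'dashscope' }],
//   default: 'tavily', // first hit in the tavily > google > dashscope priority order
// }
```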

View File

@@ -327,13 +327,9 @@ describe('gemini.tsx main function kitty protocol', () => {
openaiLogging: undefined,
openaiApiKey: undefined,
openaiBaseUrl: undefined,
openaiLoggingDir: undefined,
proxy: undefined,
includeDirectories: undefined,
tavilyApiKey: undefined,
googleApiKey: undefined,
googleSearchEngineId: undefined,
webSearchDefault: undefined,
screenReader: undefined,
vlmSwitchMode: undefined,
useSmartEdit: undefined,

View File

@@ -387,11 +387,7 @@ export async function main() {
let input = config.getQuestion();
const startupWarnings = [
...(await getStartupWarnings()),
...(await getUserStartupWarnings({
workspaceRoot: process.cwd(),
useRipgrep: settings.merged.tools?.useRipgrep ?? true,
useBuiltinRipgrep: settings.merged.tools?.useBuiltinRipgrep ?? true,
})),
...(await getUserStartupWarnings()),
];
// Render UI, passing necessary config values. Check that there is no command line question.

View File

@@ -1227,28 +1227,4 @@ describe('FileCommandLoader', () => {
expect(commands).toHaveLength(0);
});
});
describe('AbortError handling', () => {
it('should silently ignore AbortError when operation is cancelled', async () => {
const userCommandsDir = Storage.getUserCommandsDir();
mock({
[userCommandsDir]: {
'test1.toml': 'prompt = "Prompt 1"',
'test2.toml': 'prompt = "Prompt 2"',
},
});
const loader = new FileCommandLoader(null);
const controller = new AbortController();
const signal = controller.signal;
// Start loading and immediately abort
const loadPromise = loader.loadCommands(signal);
controller.abort();
// Should not throw or print errors
const commands = await loadPromise;
expect(commands).toHaveLength(0);
});
});
});

View File

@@ -120,11 +120,7 @@ export class FileCommandLoader implements ICommandLoader {
// Add all commands without deduplication
allCommands.push(...commands);
} catch (error) {
// Ignore ENOENT (directory doesn't exist) and AbortError (operation was cancelled)
const isEnoent = (error as NodeJS.ErrnoException).code === 'ENOENT';
const isAbortError =
error instanceof Error && error.name === 'AbortError';
if (!isEnoent && !isAbortError) {
if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
console.error(
`[FileCommandLoader] Error loading commands from ${dirInfo.path}:`,
error,

View File

@@ -916,9 +916,17 @@ export const AppContainer = (props: AppContainerProps) => {
(result: IdeIntegrationNudgeResult) => {
if (result.userSelection === 'yes') {
handleSlashCommand('/ide install');
settings.setValue(SettingScope.User, 'ide.hasSeenNudge', true);
settings.setValue(
SettingScope.User,
'hasSeenIdeIntegrationNudge',
true,
);
} else if (result.userSelection === 'dismiss') {
settings.setValue(SettingScope.User, 'ide.hasSeenNudge', true);
settings.setValue(
SettingScope.User,
'hasSeenIdeIntegrationNudge',
true,
);
}
setIdePromptAnswered(true);
},

View File

@@ -23,7 +23,7 @@ export const ToolsList: React.FC<ToolsListProps> = ({
}) => (
<Box flexDirection="column" marginBottom={1}>
<Text bold color={theme.text.primary}>
Available Qwen Code CLI tools:
Available Gemini CLI tools:
</Text>
<Box height={1} />
{tools.length > 0 ? (

View File

@@ -1,7 +1,7 @@
// Vitest Snapshot v1, https://vitest.dev/guide/snapshot.html
exports[`<ToolsList /> > renders correctly with descriptions 1`] = `
"Available Qwen Code CLI tools:
"Available Gemini CLI tools:
- Test Tool One (test-tool-one)
This is the first test tool.
@@ -16,14 +16,14 @@ exports[`<ToolsList /> > renders correctly with descriptions 1`] = `
`;
exports[`<ToolsList /> > renders correctly with no tools 1`] = `
"Available Qwen Code CLI tools:
"Available Gemini CLI tools:
No tools available
"
`;
exports[`<ToolsList /> > renders correctly without descriptions 1`] = `
"Available Qwen Code CLI tools:
"Available Gemini CLI tools:
- Test Tool One
- Test Tool Two

View File

@@ -22,22 +22,12 @@ vi.mock('os', async (importOriginal) => {
describe('getUserStartupWarnings', () => {
let testRootDir: string;
let homeDir: string;
let startupOptions: {
workspaceRoot: string;
useRipgrep: boolean;
useBuiltinRipgrep: boolean;
};
beforeEach(async () => {
testRootDir = await fs.mkdtemp(path.join(os.tmpdir(), 'warnings-test-'));
homeDir = path.join(testRootDir, 'home');
await fs.mkdir(homeDir, { recursive: true });
vi.mocked(os.homedir).mockReturnValue(homeDir);
startupOptions = {
workspaceRoot: testRootDir,
useRipgrep: true,
useBuiltinRipgrep: true,
};
});
afterEach(async () => {
@@ -47,10 +37,7 @@ describe('getUserStartupWarnings', () => {
describe('home directory check', () => {
it('should return a warning when running in home directory', async () => {
const warnings = await getUserStartupWarnings({
...startupOptions,
workspaceRoot: homeDir,
});
const warnings = await getUserStartupWarnings(homeDir);
expect(warnings).toContainEqual(
expect.stringContaining('home directory'),
);
@@ -59,10 +46,7 @@ describe('getUserStartupWarnings', () => {
it('should not return a warning when running in a project directory', async () => {
const projectDir = path.join(testRootDir, 'project');
await fs.mkdir(projectDir);
const warnings = await getUserStartupWarnings({
...startupOptions,
workspaceRoot: projectDir,
});
const warnings = await getUserStartupWarnings(projectDir);
expect(warnings).not.toContainEqual(
expect.stringContaining('home directory'),
);
@@ -72,10 +56,7 @@ describe('getUserStartupWarnings', () => {
describe('root directory check', () => {
it('should return a warning when running in a root directory', async () => {
const rootDir = path.parse(testRootDir).root;
const warnings = await getUserStartupWarnings({
...startupOptions,
workspaceRoot: rootDir,
});
const warnings = await getUserStartupWarnings(rootDir);
expect(warnings).toContainEqual(
expect.stringContaining('root directory'),
);
@@ -87,10 +68,7 @@ describe('getUserStartupWarnings', () => {
it('should not return a warning when running in a non-root directory', async () => {
const projectDir = path.join(testRootDir, 'project');
await fs.mkdir(projectDir);
const warnings = await getUserStartupWarnings({
...startupOptions,
workspaceRoot: projectDir,
});
const warnings = await getUserStartupWarnings(projectDir);
expect(warnings).not.toContainEqual(
expect.stringContaining('root directory'),
);
@@ -100,10 +78,7 @@ describe('getUserStartupWarnings', () => {
describe('error handling', () => {
it('should handle errors when checking directory', async () => {
const nonExistentPath = path.join(testRootDir, 'non-existent');
const warnings = await getUserStartupWarnings({
...startupOptions,
workspaceRoot: nonExistentPath,
});
const warnings = await getUserStartupWarnings(nonExistentPath);
const expectedWarning =
'Could not verify the current directory due to a file system error.';
expect(warnings).toEqual([expectedWarning, expectedWarning]);

View File

@@ -7,26 +7,19 @@
import fs from 'node:fs/promises';
import * as os from 'node:os';
import path from 'node:path';
import { canUseRipgrep } from '@qwen-code/qwen-code-core';
type WarningCheckOptions = {
workspaceRoot: string;
useRipgrep: boolean;
useBuiltinRipgrep: boolean;
};
type WarningCheck = {
id: string;
check: (options: WarningCheckOptions) => Promise<string | null>;
check: (workspaceRoot: string) => Promise<string | null>;
};
// Individual warning checks
const homeDirectoryCheck: WarningCheck = {
id: 'home-directory',
check: async (options: WarningCheckOptions) => {
check: async (workspaceRoot: string) => {
try {
const [workspaceRealPath, homeRealPath] = await Promise.all([
fs.realpath(options.workspaceRoot),
fs.realpath(workspaceRoot),
fs.realpath(os.homedir()),
]);
@@ -42,9 +35,9 @@ const homeDirectoryCheck: WarningCheck = {
const rootDirectoryCheck: WarningCheck = {
id: 'root-directory',
check: async (options: WarningCheckOptions) => {
check: async (workspaceRoot: string) => {
try {
const workspaceRealPath = await fs.realpath(options.workspaceRoot);
const workspaceRealPath = await fs.realpath(workspaceRoot);
const errorMessage =
'Warning: You are running Qwen Code in the root directory. Your entire folder structure will be used for context. It is strongly recommended to run in a project-specific directory.';
@@ -60,33 +53,17 @@ const rootDirectoryCheck: WarningCheck = {
},
};
const ripgrepAvailabilityCheck: WarningCheck = {
id: 'ripgrep-availability',
check: async (options: WarningCheckOptions) => {
if (!options.useRipgrep) {
return null;
}
const isAvailable = await canUseRipgrep(options.useBuiltinRipgrep);
if (!isAvailable) {
return 'Ripgrep not available: Please install ripgrep globally to enable faster file content search. Falling back to built-in grep.';
}
return null;
},
};
// All warning checks
const WARNING_CHECKS: readonly WarningCheck[] = [
homeDirectoryCheck,
rootDirectoryCheck,
ripgrepAvailabilityCheck,
];
export async function getUserStartupWarnings(
options: WarningCheckOptions,
workspaceRoot: string = process.cwd(),
): Promise<string[]> {
const results = await Promise.all(
WARNING_CHECKS.map((check) => check.check(options)),
WARNING_CHECKS.map((check) => check.check(workspaceRoot)),
);
return results.filter((msg) => msg !== null);
}
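
A quick usage sketch of the restored signature; the project path is a placeholder:

```ts
// Defaults to process.cwd(); pass a path explicitly to check another directory.
const warnings = await getUserStartupWarnings('/path/to/project');
for (const warning of warnings) {
  console.warn(warning);
}
```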

View File

@@ -12,12 +12,6 @@ import type {
GeminiChat,
ToolCallConfirmationDetails,
ToolResult,
SubAgentEventEmitter,
SubAgentToolCallEvent,
SubAgentToolResultEvent,
SubAgentApprovalRequestEvent,
AnyDeclarativeTool,
AnyToolInvocation,
} from '@qwen-code/qwen-code-core';
import {
AuthType,
@@ -34,10 +28,6 @@ import {
getErrorStatus,
isWithinRoot,
isNodeError,
SubAgentEventType,
TaskTool,
Kind,
TodoWriteTool,
} from '@qwen-code/qwen-code-core';
import * as acp from './acp.js';
import { AcpFileSystemService } from './fileSystemService.js';
@@ -413,34 +403,9 @@ class Session {
);
}
// Detect TodoWriteTool early - route to plan updates instead of tool_call events
const isTodoWriteTool =
fc.name === TodoWriteTool.Name || tool.name === TodoWriteTool.Name;
// Declare subAgentToolEventListeners outside try block for cleanup in catch
let subAgentToolEventListeners: Array<() => void> = [];
try {
const invocation = tool.build(args);
// Detect TaskTool and set up sub-agent tool tracking
const isTaskTool = tool.name === TaskTool.Name;
if (isTaskTool && 'eventEmitter' in invocation) {
// Access eventEmitter from TaskTool invocation
const taskEventEmitter = (
invocation as {
eventEmitter: SubAgentEventEmitter;
}
).eventEmitter;
// Set up sub-agent tool tracking
subAgentToolEventListeners = this.setupSubAgentToolTracking(
taskEventEmitter,
abortSignal,
);
}
const confirmationDetails =
await invocation.shouldConfirmExecute(abortSignal);
@@ -495,8 +460,7 @@ class Session {
throw new Error(`Unexpected: ${resultOutcome}`);
}
}
} else if (!isTodoWriteTool) {
// Skip tool_call event for TodoWriteTool
} else {
await this.sendUpdate({
sessionUpdate: 'tool_call',
toolCallId: callId,
@@ -509,61 +473,14 @@ class Session {
}
const toolResult: ToolResult = await invocation.execute(abortSignal);
const content = toToolCallContent(toolResult);
// Clean up event listeners
subAgentToolEventListeners.forEach((cleanup) => cleanup());
// Handle TodoWriteTool: extract todos and send plan update
if (isTodoWriteTool) {
// Extract todos from args (initial state)
let todos: Array<{
id: string;
content: string;
status: 'pending' | 'in_progress' | 'completed';
}> = [];
if (Array.isArray(args['todos'])) {
todos = args['todos'] as Array<{
id: string;
content: string;
status: 'pending' | 'in_progress' | 'completed';
}>;
}
// If returnDisplay has todos (e.g., modified by user), use those instead
if (
toolResult.returnDisplay &&
typeof toolResult.returnDisplay === 'object' &&
'type' in toolResult.returnDisplay &&
toolResult.returnDisplay.type === 'todo_list' &&
'todos' in toolResult.returnDisplay &&
Array.isArray(toolResult.returnDisplay.todos)
) {
todos = toolResult.returnDisplay.todos;
}
// Convert todos to plan entries and send plan update
if (todos.length > 0 || Array.isArray(args['todos'])) {
const planEntries = convertTodosToPlanEntries(todos);
await this.sendUpdate({
sessionUpdate: 'plan',
entries: planEntries,
});
}
// Skip tool_call_update event for TodoWriteTool
// Still log and return function response for LLM
} else {
// Normal tool handling: send tool_call_update
const content = toToolCallContent(toolResult);
await this.sendUpdate({
sessionUpdate: 'tool_call_update',
toolCallId: callId,
status: 'completed',
content: content ? [content] : [],
});
}
await this.sendUpdate({
sessionUpdate: 'tool_call_update',
toolCallId: callId,
status: 'completed',
content: content ? [content] : [],
});
const durationMs = Date.now() - startTime;
logToolCall(this.config, {
@@ -583,9 +500,6 @@ class Session {
return convertToFunctionResponse(fc.name, callId, toolResult.llmContent);
} catch (e) {
// Ensure cleanup on error
subAgentToolEventListeners.forEach((cleanup) => cleanup());
const error = e instanceof Error ? e : new Error(String(e));
await this.sendUpdate({
@@ -601,300 +515,6 @@ class Session {
}
}
/**
* Sets up event listeners to track sub-agent tool calls within a TaskTool execution.
* Converts subagent tool call events into zedIntegration session updates.
*
* @param eventEmitter - The SubAgentEventEmitter from TaskTool
* @param abortSignal - Signal to abort tracking if parent is cancelled
* @returns Array of cleanup functions to remove event listeners
*/
private setupSubAgentToolTracking(
eventEmitter: SubAgentEventEmitter,
abortSignal: AbortSignal,
): Array<() => void> {
const cleanupFunctions: Array<() => void> = [];
const toolRegistry = this.config.getToolRegistry();
// Track subagent tool call states
const subAgentToolStates = new Map<
string,
{
tool?: AnyDeclarativeTool;
invocation?: AnyToolInvocation;
args?: Record<string, unknown>;
}
>();
// Listen for tool call start
const onToolCall = (...args: unknown[]) => {
const event = args[0] as SubAgentToolCallEvent;
if (abortSignal.aborted) return;
const subAgentTool = toolRegistry.getTool(event.name);
let subAgentInvocation: AnyToolInvocation | undefined;
let toolKind: acp.ToolKind = 'other';
let locations: acp.ToolCallLocation[] = [];
if (subAgentTool) {
try {
subAgentInvocation = subAgentTool.build(event.args);
toolKind = this.mapToolKind(subAgentTool.kind);
locations = subAgentInvocation.toolLocations().map((loc) => ({
path: loc.path,
line: loc.line ?? null,
}));
} catch (e) {
// If building fails, continue with defaults
console.warn(`Failed to build subagent tool ${event.name}:`, e);
}
}
// Save state for subsequent updates
subAgentToolStates.set(event.callId, {
tool: subAgentTool,
invocation: subAgentInvocation,
args: event.args,
});
// Check if this is TodoWriteTool - if so, skip sending tool_call event
// Plan update will be sent in onToolResult when we have the final state
if (event.name === TodoWriteTool.Name) {
return;
}
// Send tool call start update with rawInput
void this.sendUpdate({
sessionUpdate: 'tool_call',
toolCallId: event.callId,
status: 'in_progress',
title: event.description || event.name,
content: [],
locations,
kind: toolKind,
rawInput: event.args,
});
};
// Listen for tool call result
const onToolResult = (...args: unknown[]) => {
const event = args[0] as SubAgentToolResultEvent;
if (abortSignal.aborted) return;
const state = subAgentToolStates.get(event.callId);
// Check if this is TodoWriteTool - if so, route to plan updates
if (event.name === TodoWriteTool.Name) {
let todos:
| Array<{
id: string;
content: string;
status: 'pending' | 'in_progress' | 'completed';
}>
| undefined;
// Try to extract todos from resultDisplay first (final state)
if (event.resultDisplay) {
try {
// resultDisplay might be a JSON stringified object
const parsed =
typeof event.resultDisplay === 'string'
? JSON.parse(event.resultDisplay)
: event.resultDisplay;
if (
typeof parsed === 'object' &&
parsed !== null &&
'type' in parsed &&
parsed.type === 'todo_list' &&
'todos' in parsed &&
Array.isArray(parsed.todos)
) {
todos = parsed.todos;
}
} catch {
// If parsing fails, ignore - resultDisplay might not be JSON
}
}
// Fallback to args if resultDisplay doesn't have todos
if (!todos && state?.args && Array.isArray(state.args['todos'])) {
todos = state.args['todos'] as Array<{
id: string;
content: string;
status: 'pending' | 'in_progress' | 'completed';
}>;
}
// Send plan update if we have todos
if (todos) {
const planEntries = convertTodosToPlanEntries(todos);
void this.sendUpdate({
sessionUpdate: 'plan',
entries: planEntries,
});
}
// Skip sending tool_call_update event for TodoWriteTool
// Clean up state
subAgentToolStates.delete(event.callId);
return;
}
let content: acp.ToolCallContent[] = [];
// If there's a result display, try to convert to ToolCallContent
if (event.resultDisplay && state?.invocation) {
// resultDisplay is typically a string
if (typeof event.resultDisplay === 'string') {
content = [
{
type: 'content',
content: {
type: 'text',
text: event.resultDisplay,
},
},
];
}
}
// Send tool call completion update
void this.sendUpdate({
sessionUpdate: 'tool_call_update',
toolCallId: event.callId,
status: event.success ? 'completed' : 'failed',
content: content.length > 0 ? content : [],
title: state?.invocation?.getDescription() ?? event.name,
kind: state?.tool ? this.mapToolKind(state.tool.kind) : null,
locations:
state?.invocation?.toolLocations().map((loc) => ({
path: loc.path,
line: loc.line ?? null,
})) ?? null,
rawInput: state?.args,
});
// Clean up state
subAgentToolStates.delete(event.callId);
};
// Listen for permission requests
const onToolWaitingApproval = async (...args: unknown[]) => {
const event = args[0] as SubAgentApprovalRequestEvent;
if (abortSignal.aborted) return;
const state = subAgentToolStates.get(event.callId);
const content: acp.ToolCallContent[] = [];
// Handle different confirmation types
if (event.confirmationDetails.type === 'edit') {
const editDetails = event.confirmationDetails as unknown as {
type: 'edit';
fileName: string;
originalContent: string | null;
newContent: string;
};
content.push({
type: 'diff',
path: editDetails.fileName,
oldText: editDetails.originalContent ?? '',
newText: editDetails.newContent,
});
}
// Build permission request options from confirmation details
// event.confirmationDetails already contains all fields except onConfirm,
// which we add here to satisfy the type requirement for toPermissionOptions
const fullConfirmationDetails = {
...event.confirmationDetails,
onConfirm: async () => {
// This is a placeholder - the actual response is handled via event.respond
},
} as unknown as ToolCallConfirmationDetails;
const params: acp.RequestPermissionRequest = {
sessionId: this.id,
options: toPermissionOptions(fullConfirmationDetails),
toolCall: {
toolCallId: event.callId,
status: 'pending',
title: event.description || event.name,
content,
locations:
state?.invocation?.toolLocations().map((loc) => ({
path: loc.path,
line: loc.line ?? null,
})) ?? [],
kind: state?.tool ? this.mapToolKind(state.tool.kind) : 'other',
rawInput: state?.args,
},
};
try {
// Request permission from zed client
const output = await this.client.requestPermission(params);
const outcome =
output.outcome.outcome === 'cancelled'
? ToolConfirmationOutcome.Cancel
: z
.nativeEnum(ToolConfirmationOutcome)
.parse(output.outcome.optionId);
// Respond to subagent with the outcome
await event.respond(outcome);
} catch (error) {
// If permission request fails, cancel the tool call
console.error(
`Permission request failed for subagent tool ${event.name}:`,
error,
);
await event.respond(ToolConfirmationOutcome.Cancel);
}
};
// Register event listeners
eventEmitter.on(SubAgentEventType.TOOL_CALL, onToolCall);
eventEmitter.on(SubAgentEventType.TOOL_RESULT, onToolResult);
eventEmitter.on(
SubAgentEventType.TOOL_WAITING_APPROVAL,
onToolWaitingApproval,
);
// Return cleanup functions
cleanupFunctions.push(() => {
eventEmitter.off(SubAgentEventType.TOOL_CALL, onToolCall);
eventEmitter.off(SubAgentEventType.TOOL_RESULT, onToolResult);
eventEmitter.off(
SubAgentEventType.TOOL_WAITING_APPROVAL,
onToolWaitingApproval,
);
});
return cleanupFunctions;
}
/**
* Maps core Tool Kind enum to ACP ToolKind string literals.
*
* @param kind - The core Kind enum value
* @returns The corresponding ACP ToolKind string literal
*/
private mapToolKind(kind: Kind): acp.ToolKind {
const kindMap: Record<Kind, acp.ToolKind> = {
[Kind.Read]: 'read',
[Kind.Edit]: 'edit',
[Kind.Delete]: 'delete',
[Kind.Move]: 'move',
[Kind.Search]: 'search',
[Kind.Execute]: 'execute',
[Kind.Think]: 'think',
[Kind.Fetch]: 'fetch',
[Kind.Other]: 'other',
};
return kindMap[kind] ?? 'other';
}
async #resolvePrompt(
message: acp.ContentBlock[],
abortSignal: AbortSignal,
@@ -1239,27 +859,6 @@ class Session {
}
}
/**
* Converts todo items to plan entries format for zed integration.
* Maps todo status to plan status and assigns a default priority.
*
* @param todos - Array of todo items with id, content, and status
* @returns Array of plan entries with content, priority, and status
*/
function convertTodosToPlanEntries(
todos: Array<{
id: string;
content: string;
status: 'pending' | 'in_progress' | 'completed';
}>,
): acp.PlanEntry[] {
return todos.map((todo) => ({
content: todo.content,
priority: 'medium' as const, // Default priority since todos don't have priority
status: todo.status,
}));
}
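
An input/output sketch for the helper removed above (ids and content are illustrative):

```ts
convertTodosToPlanEntries([
  { id: '1', content: 'Add telemetry hooks', status: 'in_progress' },
  { id: '2', content: 'Write tests', status: 'pending' },
]);
// => [
//   { content: 'Add telemetry hooks', priority: 'medium', status: 'in_progress' },
//   { content: 'Write tests', priority: 'medium', status: 'pending' },
// ]
```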
function toToolCallContent(toolResult: ToolResult): acp.ToolCallContent | null {
if (toolResult.error?.message) {
throw new Error(toolResult.error.message);
@@ -1271,6 +870,26 @@ function toToolCallContent(toolResult: ToolResult): acp.ToolCallContent | null {
type: 'content',
content: { type: 'text', text: toolResult.returnDisplay },
};
} else if (
'type' in toolResult.returnDisplay &&
toolResult.returnDisplay.type === 'todo_list'
) {
// Handle TodoResultDisplay - convert to text representation
const todoText = toolResult.returnDisplay.todos
.map((todo) => {
const statusIcon = {
pending: '○',
in_progress: '◐',
completed: '●',
}[todo.status];
return `${statusIcon} ${todo.content}`;
})
.join('\n');
return {
type: 'content',
content: { type: 'text', text: todoText },
};
} else if (
'type' in toolResult.returnDisplay &&
toolResult.returnDisplay.type === 'plan_summary'

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code-core",
"version": "0.1.5",
"version": "0.1.2",
"description": "Qwen Code Core",
"repository": {
"type": "git",

View File

@@ -154,11 +154,6 @@ vi.mock('../core/tokenLimits.js', () => ({
describe('Server Config (config.ts)', () => {
const MODEL = 'qwen3-coder-plus';
// Default mock for canUseRipgrep to return true (tests that care about ripgrep will override this)
beforeEach(() => {
vi.mocked(canUseRipgrep).mockResolvedValue(true);
});
const SANDBOX: SandboxConfig = {
command: 'docker',
image: 'qwen-code-sandbox',
@@ -583,40 +578,6 @@ describe('Server Config (config.ts)', () => {
});
});
describe('UseBuiltinRipgrep Configuration', () => {
it('should default useBuiltinRipgrep to true when not provided', () => {
const config = new Config(baseParams);
expect(config.getUseBuiltinRipgrep()).toBe(true);
});
it('should set useBuiltinRipgrep to false when provided as false', () => {
const paramsWithBuiltinRipgrep: ConfigParameters = {
...baseParams,
useBuiltinRipgrep: false,
};
const config = new Config(paramsWithBuiltinRipgrep);
expect(config.getUseBuiltinRipgrep()).toBe(false);
});
it('should set useBuiltinRipgrep to true when explicitly provided as true', () => {
const paramsWithBuiltinRipgrep: ConfigParameters = {
...baseParams,
useBuiltinRipgrep: true,
};
const config = new Config(paramsWithBuiltinRipgrep);
expect(config.getUseBuiltinRipgrep()).toBe(true);
});
it('should default useBuiltinRipgrep to true when undefined', () => {
const paramsWithUndefinedBuiltinRipgrep: ConfigParameters = {
...baseParams,
useBuiltinRipgrep: undefined,
};
const config = new Config(paramsWithUndefinedBuiltinRipgrep);
expect(config.getUseBuiltinRipgrep()).toBe(true);
});
});
describe('createToolRegistry', () => {
it('should register a tool if coreTools contains an argument-specific pattern', async () => {
const params: ConfigParameters = {
@@ -864,60 +825,10 @@ describe('setApprovalMode with folder trust', () => {
expect(wasRipGrepRegistered).toBe(true);
expect(wasGrepRegistered).toBe(false);
expect(canUseRipgrep).toHaveBeenCalledWith(true);
expect(logRipgrepFallback).not.toHaveBeenCalled();
});
it('should register RipGrepTool with system ripgrep when useBuiltinRipgrep is false', async () => {
(canUseRipgrep as Mock).mockResolvedValue(true);
const config = new Config({
...baseParams,
useRipgrep: true,
useBuiltinRipgrep: false,
});
await config.initialize();
const calls = (ToolRegistry.prototype.registerTool as Mock).mock.calls;
const wasRipGrepRegistered = calls.some(
(call) => call[0] instanceof vi.mocked(RipGrepTool),
);
const wasGrepRegistered = calls.some(
(call) => call[0] instanceof vi.mocked(GrepTool),
);
expect(wasRipGrepRegistered).toBe(true);
expect(wasGrepRegistered).toBe(false);
expect(canUseRipgrep).toHaveBeenCalledWith(false);
});
it('should fall back to GrepTool and log error when useBuiltinRipgrep is false but system ripgrep is not available', async () => {
(canUseRipgrep as Mock).mockResolvedValue(false);
const config = new Config({
...baseParams,
useRipgrep: true,
useBuiltinRipgrep: false,
});
await config.initialize();
const calls = (ToolRegistry.prototype.registerTool as Mock).mock.calls;
const wasRipGrepRegistered = calls.some(
(call) => call[0] instanceof vi.mocked(RipGrepTool),
);
const wasGrepRegistered = calls.some(
(call) => call[0] instanceof vi.mocked(GrepTool),
);
expect(wasRipGrepRegistered).toBe(false);
expect(wasGrepRegistered).toBe(true);
expect(canUseRipgrep).toHaveBeenCalledWith(false);
expect(logRipgrepFallback).toHaveBeenCalledWith(
config,
expect.any(RipgrepFallbackEvent),
);
const event = (logRipgrepFallback as Mock).mock.calls[0][1];
expect(event.error).toContain('Ripgrep is not available');
});
it('should fall back to GrepTool and log error when useRipgrep is true and builtin ripgrep is not available', async () => {
it('should register GrepTool as a fallback when useRipgrep is true but it is not available', async () => {
(canUseRipgrep as Mock).mockResolvedValue(false);
const config = new Config({ ...baseParams, useRipgrep: true });
await config.initialize();
@@ -932,16 +843,15 @@ describe('setApprovalMode with folder trust', () => {
expect(wasRipGrepRegistered).toBe(false);
expect(wasGrepRegistered).toBe(true);
expect(canUseRipgrep).toHaveBeenCalledWith(true);
expect(logRipgrepFallback).toHaveBeenCalledWith(
config,
expect.any(RipgrepFallbackEvent),
);
const event = (logRipgrepFallback as Mock).mock.calls[0][1];
expect(event.error).toContain('Ripgrep is not available');
expect(event.error).toBeUndefined();
});
it('should fall back to GrepTool and log error when canUseRipgrep throws an error', async () => {
it('should register GrepTool as a fallback when canUseRipgrep throws an error', async () => {
const error = new Error('ripGrep check failed');
(canUseRipgrep as Mock).mockRejectedValue(error);
const config = new Config({ ...baseParams, useRipgrep: true });
@@ -980,6 +890,7 @@ describe('setApprovalMode with folder trust', () => {
expect(wasRipGrepRegistered).toBe(false);
expect(wasGrepRegistered).toBe(true);
expect(canUseRipgrep).not.toHaveBeenCalled();
expect(logRipgrepFallback).not.toHaveBeenCalled();
});
});
});

View File

@@ -57,7 +57,7 @@ import { TaskTool } from '../tools/task.js';
import { TodoWriteTool } from '../tools/todoWrite.js';
import { ToolRegistry } from '../tools/tool-registry.js';
import { WebFetchTool } from '../tools/web-fetch.js';
import { WebSearchTool } from '../tools/web-search/index.js';
import { WebSearchTool } from '../tools/web-search.js';
import { WriteFileTool } from '../tools/write-file.js';
// Other modules
@@ -262,19 +262,11 @@ export interface ConfigParameters {
cliVersion?: string;
loadMemoryFromIncludeDirectories?: boolean;
// Web search providers
webSearch?: {
provider: Array<{
type: 'tavily' | 'google' | 'dashscope';
apiKey?: string;
searchEngineId?: string;
}>;
default: string;
};
tavilyApiKey?: string;
chatCompression?: ChatCompressionSettings;
interactive?: boolean;
trustedFolder?: boolean;
useRipgrep?: boolean;
useBuiltinRipgrep?: boolean;
shouldUseNodePtyShell?: boolean;
skipNextSpeakerCheck?: boolean;
shellExecutionConfig?: ShellExecutionConfig;
@@ -358,19 +350,11 @@ export class Config {
private readonly cliVersion?: string;
private readonly experimentalZedIntegration: boolean = false;
private readonly loadMemoryFromIncludeDirectories: boolean = false;
private readonly webSearch?: {
provider: Array<{
type: 'tavily' | 'google' | 'dashscope';
apiKey?: string;
searchEngineId?: string;
}>;
default: string;
};
private readonly tavilyApiKey?: string;
private readonly chatCompression: ChatCompressionSettings | undefined;
private readonly interactive: boolean;
private readonly trustedFolder: boolean | undefined;
private readonly useRipgrep: boolean;
private readonly useBuiltinRipgrep: boolean;
private readonly shouldUseNodePtyShell: boolean;
private readonly skipNextSpeakerCheck: boolean;
private shellExecutionConfig: ShellExecutionConfig;
@@ -468,12 +452,13 @@ export class Config {
this.chatCompression = params.chatCompression;
this.interactive = params.interactive ?? false;
this.trustedFolder = params.trustedFolder;
this.shouldUseNodePtyShell = params.shouldUseNodePtyShell ?? false;
this.skipNextSpeakerCheck = params.skipNextSpeakerCheck ?? false;
this.skipLoopDetection = params.skipLoopDetection ?? false;
// Web search
this.webSearch = params.webSearch;
this.tavilyApiKey = params.tavilyApiKey;
this.useRipgrep = params.useRipgrep ?? true;
this.useBuiltinRipgrep = params.useBuiltinRipgrep ?? true;
this.shouldUseNodePtyShell = params.shouldUseNodePtyShell ?? false;
this.skipNextSpeakerCheck = params.skipNextSpeakerCheck ?? true;
this.shellExecutionConfig = {
@@ -926,8 +911,8 @@ export class Config {
}
// Web search provider configuration
getWebSearchConfig() {
return this.webSearch;
getTavilyApiKey(): string | undefined {
return this.tavilyApiKey;
}
getIdeMode(): boolean {
@@ -1003,10 +988,6 @@ export class Config {
return this.useRipgrep;
}
getUseBuiltinRipgrep(): boolean {
return this.useBuiltinRipgrep;
}
getShouldUseNodePtyShell(): boolean {
return this.shouldUseNodePtyShell;
}
@@ -1134,18 +1115,13 @@ export class Config {
let useRipgrep = false;
let errorString: undefined | string = undefined;
try {
useRipgrep = await canUseRipgrep(this.getUseBuiltinRipgrep());
useRipgrep = await canUseRipgrep();
} catch (error: unknown) {
errorString = String(error);
}
if (useRipgrep) {
registerCoreTool(RipGrepTool, this);
} else {
errorString =
errorString ||
'Ripgrep is not available. Please install ripgrep globally.';
// Log for telemetry
logRipgrepFallback(this, new RipgrepFallbackEvent(errorString));
registerCoreTool(GrepTool, this);
}
@@ -1166,10 +1142,8 @@ export class Config {
registerCoreTool(TodoWriteTool, this);
registerCoreTool(ExitPlanModeTool, this);
registerCoreTool(WebFetchTool, this);
// Conditionally register web search tool if web search provider is configured
// buildWebSearchConfig ensures qwen-oauth users get dashscope provider, so
// if tool is registered, config must exist
if (this.getWebSearchConfig()) {
// Conditionally register web search tool only if Tavily API key is set
if (this.getTavilyApiKey()) {
registerCoreTool(WebSearchTool, this);
}
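
In practice the restored gating behaves roughly as follows; `baseParams` stands in for a full ConfigParameters object and the key is a placeholder:

```ts
const config = new Config({ ...baseParams, tavilyApiKey: 'tvly-placeholder' });
await config.initialize();
// WebSearchTool is registered only because getTavilyApiKey() returned a value;
// omit tavilyApiKey and the tool is skipped entirely.
```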

View File

@@ -69,7 +69,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -288,7 +288,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -517,7 +517,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -731,7 +731,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -945,7 +945,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1159,7 +1159,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1373,7 +1373,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1587,7 +1587,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -1801,7 +1801,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -2015,7 +2015,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -2252,7 +2252,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -2549,7 +2549,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -2786,7 +2786,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -3079,7 +3079,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
@@ -3293,7 +3293,7 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
## Software Engineering Tasks
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the 'todo_write' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'grep_search', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use 'search_file_content', 'glob', 'read_file', and 'read_many_files' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., 'edit', 'write_file' 'run_shell_command' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.

View File

@@ -21,9 +21,6 @@ vi.mock('../../telemetry/loggers.js', () => ({
}));
vi.mock('../../utils/openaiLogger.js', () => ({
OpenAILogger: vi.fn().mockImplementation(() => ({
logInteraction: vi.fn(),
})),
openaiLogger: {
logInteraction: vi.fn(),
},

View File

@@ -16,11 +16,11 @@ import {
import type { Content, GenerateContentResponse, Part } from '@google/genai';
import {
findCompressSplitPoint,
isThinkingDefault,
isThinkingSupported,
GeminiClient,
} from './client.js';
import { findCompressSplitPoint } from '../services/chatCompressionService.js';
import {
AuthType,
type ContentGenerator,
@@ -42,6 +42,7 @@ import { setSimulate429 } from '../utils/testUtils.js';
import { tokenLimit } from './tokenLimits.js';
import { ideContextStore } from '../ide/ideContext.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
import { QwenLogger } from '../telemetry/index.js';
// Mock fs module to prevent actual file system operations during tests
const mockFileSystem = new Map<string, string>();
@@ -100,22 +101,6 @@ vi.mock('../utils/errorReporting', () => ({ reportError: vi.fn() }));
vi.mock('../utils/nextSpeakerChecker', () => ({
checkNextSpeaker: vi.fn().mockResolvedValue(null),
}));
vi.mock('../utils/environmentContext', () => ({
getEnvironmentContext: vi
.fn()
.mockResolvedValue([{ text: 'Mocked env context' }]),
getInitialChatHistory: vi.fn(async (_config, extraHistory) => [
{
role: 'user',
parts: [{ text: 'Mocked env context' }],
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the context!' }],
},
...(extraHistory ?? []),
]),
}));
vi.mock('../utils/generateContentResponseUtilities', () => ({
getResponseText: (result: GenerateContentResponse) =>
result.candidates?.[0]?.content?.parts?.map((part) => part.text).join('') ||
@@ -151,10 +136,6 @@ vi.mock('../ide/ideContext.js');
vi.mock('../telemetry/uiTelemetry.js', () => ({
uiTelemetryService: mockUiTelemetryService,
}));
vi.mock('../telemetry/loggers.js', () => ({
logChatCompression: vi.fn(),
logNextSpeakerCheck: vi.fn(),
}));
/**
* Array.fromAsync ponyfill, which will be available in es 2024.
@@ -638,8 +619,7 @@ describe('Gemini Client (client.ts)', () => {
});
it('logs a telemetry event when compressing', async () => {
const { logChatCompression } = await import('../telemetry/loggers.js');
vi.mocked(logChatCompression).mockClear();
vi.spyOn(QwenLogger.prototype, 'logChatCompressionEvent');
const MOCKED_TOKEN_LIMIT = 1000;
const MOCKED_CONTEXT_PERCENTAGE_THRESHOLD = 0.5;
@@ -647,37 +627,19 @@ describe('Gemini Client (client.ts)', () => {
vi.spyOn(client['config'], 'getChatCompression').mockReturnValue({
contextPercentageThreshold: MOCKED_CONTEXT_PERCENTAGE_THRESHOLD,
});
// Need multiple history items so there's something to compress
const history = [
{ role: 'user', parts: [{ text: '...history 1...' }] },
{ role: 'model', parts: [{ text: '...history 2...' }] },
{ role: 'user', parts: [{ text: '...history 3...' }] },
{ role: 'model', parts: [{ text: '...history 4...' }] },
];
const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
mockGetHistory.mockReturnValue(history);
// Token count needs to be ABOVE the threshold to trigger compression
const originalTokenCount =
MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD + 1;
MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD;
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(
originalTokenCount,
);
// Mock the summary response from the chat
// We need to control the estimated new token count.
// We mock startChat to return a chat with a known history.
const summaryText = 'This is a summary.';
mockGenerateContentFn.mockResolvedValue({
candidates: [
{
content: {
role: 'model',
parts: [{ text: summaryText }],
},
},
],
} as unknown as GenerateContentResponse);
// Mock startChat to complete the compression flow
const splitPoint = findCompressSplitPoint(history, 0.7);
const historyToKeep = history.slice(splitPoint);
const newCompressedHistory: Content[] = [
@@ -697,36 +659,52 @@ describe('Gemini Client (client.ts)', () => {
.fn()
.mockResolvedValue(mockNewChat as GeminiChat);
const totalChars = newCompressedHistory.reduce(
(total, content) => total + JSON.stringify(content).length,
0,
);
const newTokenCount = Math.floor(totalChars / 4);
// Mock the summary response from the chat
mockGenerateContentFn.mockResolvedValue({
candidates: [
{
content: {
role: 'model',
parts: [{ text: summaryText }],
},
},
],
} as unknown as GenerateContentResponse);
await client.tryCompressChat('prompt-id-3', false);
expect(logChatCompression).toHaveBeenCalledWith(
expect.anything(),
expect(QwenLogger.prototype.logChatCompressionEvent).toHaveBeenCalledWith(
expect.objectContaining({
tokens_before: originalTokenCount,
tokens_after: newTokenCount,
}),
);
expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalled();
expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalledWith(
newTokenCount,
);
expect(uiTelemetryService.setLastPromptTokenCount).toHaveBeenCalledTimes(
1,
);
});
it('should trigger summarization if token count is above threshold with contextPercentageThreshold setting', async () => {
it('should trigger summarization if token count is at threshold with contextPercentageThreshold setting', async () => {
const MOCKED_TOKEN_LIMIT = 1000;
const MOCKED_CONTEXT_PERCENTAGE_THRESHOLD = 0.5;
vi.mocked(tokenLimit).mockReturnValue(MOCKED_TOKEN_LIMIT);
vi.spyOn(client['config'], 'getChatCompression').mockReturnValue({
contextPercentageThreshold: MOCKED_CONTEXT_PERCENTAGE_THRESHOLD,
});
// Need multiple history items so there's something to compress
const history = [
{ role: 'user', parts: [{ text: '...history 1...' }] },
{ role: 'model', parts: [{ text: '...history 2...' }] },
{ role: 'user', parts: [{ text: '...history 3...' }] },
{ role: 'model', parts: [{ text: '...history 4...' }] },
];
const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
mockGetHistory.mockReturnValue(history);
// Token count needs to be ABOVE the threshold to trigger compression
const originalTokenCount =
MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD + 1;
MOCKED_TOKEN_LIMIT * MOCKED_CONTEXT_PERCENTAGE_THRESHOLD;
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(
originalTokenCount,
@@ -886,13 +864,7 @@ describe('Gemini Client (client.ts)', () => {
});
it('should always trigger summarization when force is true, regardless of token count', async () => {
// Need multiple history items so there's something to compress
const history = [
{ role: 'user', parts: [{ text: '...history 1...' }] },
{ role: 'model', parts: [{ text: '...history 2...' }] },
{ role: 'user', parts: [{ text: '...history 3...' }] },
{ role: 'model', parts: [{ text: '...history 4...' }] },
];
const history = [{ role: 'user', parts: [{ text: '...history...' }] }];
mockGetHistory.mockReturnValue(history);
const originalTokenCount = 100; // Well below threshold, but > estimated new count

View File

@@ -25,11 +25,13 @@ import {
import type { ContentGenerator } from './contentGenerator.js';
import { GeminiChat } from './geminiChat.js';
import {
getCompressionPrompt,
getCoreSystemPrompt,
getCustomSystemPrompt,
getPlanModeSystemReminder,
getSubagentSystemReminder,
} from './prompts.js';
import { tokenLimit } from './tokenLimits.js';
import {
CompressionStatus,
GeminiEventType,
@@ -40,11 +42,6 @@ import {
// Services
import { type ChatRecordingService } from '../services/chatRecordingService.js';
import {
ChatCompressionService,
COMPRESSION_PRESERVE_THRESHOLD,
COMPRESSION_TOKEN_THRESHOLD,
} from '../services/chatCompressionService.js';
import { LoopDetectionService } from '../services/loopDetectionService.js';
// Tools
@@ -53,18 +50,21 @@ import { TaskTool } from '../tools/task.js';
// Telemetry
import {
NextSpeakerCheckEvent,
logChatCompression,
logNextSpeakerCheck,
makeChatCompressionEvent,
uiTelemetryService,
} from '../telemetry/index.js';
// Utilities
import {
getDirectoryContextString,
getInitialChatHistory,
getEnvironmentContext,
} from '../utils/environmentContext.js';
import { reportError } from '../utils/errorReporting.js';
import { getErrorMessage } from '../utils/errors.js';
import { checkNextSpeaker } from '../utils/nextSpeakerChecker.js';
import { flatMapTextParts } from '../utils/partUtils.js';
import { flatMapTextParts, getResponseText } from '../utils/partUtils.js';
import { retryWithBackoff } from '../utils/retry.js';
// IDE integration
@@ -85,8 +85,68 @@ export function isThinkingDefault(model: string) {
return model.startsWith('gemini-2.5') || model === DEFAULT_GEMINI_MODEL_AUTO;
}
/**
* Returns the index of the oldest item to keep when compressing. May return
* contents.length which indicates that everything should be compressed.
*
* Exported for testing purposes.
*/
export function findCompressSplitPoint(
contents: Content[],
fraction: number,
): number {
if (fraction <= 0 || fraction >= 1) {
throw new Error('Fraction must be between 0 and 1');
}
const charCounts = contents.map((content) => JSON.stringify(content).length);
const totalCharCount = charCounts.reduce((a, b) => a + b, 0);
const targetCharCount = totalCharCount * fraction;
let lastSplitPoint = 0; // 0 is always valid (compress nothing)
let cumulativeCharCount = 0;
for (let i = 0; i < contents.length; i++) {
const content = contents[i];
if (
content.role === 'user' &&
!content.parts?.some((part) => !!part.functionResponse)
) {
if (cumulativeCharCount >= targetCharCount) {
return i;
}
lastSplitPoint = i;
}
cumulativeCharCount += charCounts[i];
}
// We found no split points after targetCharCount.
// Check if it's safe to compress everything.
const lastContent = contents[contents.length - 1];
if (
lastContent?.role === 'model' &&
!lastContent?.parts?.some((part) => part.functionCall)
) {
return contents.length;
}
// Can't compress everything so just compress at last splitpoint.
return lastSplitPoint;
}
const MAX_TURNS = 100;
/**
* Threshold for compression token count as a fraction of the model's token limit.
* If the chat history exceeds this threshold, it will be compressed.
*/
const COMPRESSION_TOKEN_THRESHOLD = 0.7;
/**
* The fraction of the latest chat history to keep. A value of 0.3
* means that only the last 30% of the chat history will be kept after compression.
*/
const COMPRESSION_PRESERVE_THRESHOLD = 0.3;
export class GeminiClient {
private chat?: GeminiChat;
private readonly generateContentConfig: GenerateContentConfig = {
@@ -183,13 +243,23 @@ export class GeminiClient {
async startChat(extraHistory?: Content[]): Promise<GeminiChat> {
this.forceFullIdeContext = true;
this.hasFailedCompressionAttempt = false;
const envParts = await getEnvironmentContext(this.config);
const toolRegistry = this.config.getToolRegistry();
const toolDeclarations = toolRegistry.getFunctionDeclarations();
const tools: Tool[] = [{ functionDeclarations: toolDeclarations }];
const history = await getInitialChatHistory(this.config, extraHistory);
const history: Content[] = [
{
role: 'user',
parts: envParts,
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the context!' }],
},
...(extraHistory ?? []),
];
try {
const userMemory = this.config.getUserMemory();
const model = this.config.getModel();
@@ -433,15 +503,14 @@ export class GeminiClient {
userMemory,
this.config.getModel(),
);
const initialHistory = await getInitialChatHistory(this.config);
const environment = await getEnvironmentContext(this.config);
// Create a mock request content to count total tokens
const mockRequestContent = [
{
role: 'system' as const,
parts: [{ text: systemPrompt }],
parts: [{ text: systemPrompt }, ...environment],
},
...initialHistory,
...currentHistory,
];
@@ -663,37 +732,127 @@ export class GeminiClient {
prompt_id: string,
force: boolean = false,
): Promise<ChatCompressionInfo> {
const compressionService = new ChatCompressionService();
const model = this.config.getModel();
const { newHistory, info } = await compressionService.compress(
this.getChat(),
prompt_id,
force,
this.config.getModel(),
this.config,
this.hasFailedCompressionAttempt,
);
const curatedHistory = this.getChat().getHistory(true);
// Handle compression result
if (info.compressionStatus === CompressionStatus.COMPRESSED) {
// Success: update chat with new compressed history
if (newHistory) {
this.chat = await this.startChat(newHistory);
this.forceFullIdeContext = true;
}
} else if (
info.compressionStatus ===
CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT ||
info.compressionStatus ===
CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY
// Regardless of `force`, don't do anything if the history is empty.
if (
curatedHistory.length === 0 ||
(this.hasFailedCompressionAttempt && !force)
) {
// Track failed attempts (only mark as failed if not forced)
if (!force) {
this.hasFailedCompressionAttempt = true;
return {
originalTokenCount: 0,
newTokenCount: 0,
compressionStatus: CompressionStatus.NOOP,
};
}
const originalTokenCount = uiTelemetryService.getLastPromptTokenCount();
const contextPercentageThreshold =
this.config.getChatCompression()?.contextPercentageThreshold;
// Don't compress if not forced and we are under the limit.
if (!force) {
const threshold =
contextPercentageThreshold ?? COMPRESSION_TOKEN_THRESHOLD;
if (originalTokenCount < threshold * tokenLimit(model)) {
return {
originalTokenCount,
newTokenCount: originalTokenCount,
compressionStatus: CompressionStatus.NOOP,
};
}
}
return info;
const splitPoint = findCompressSplitPoint(
curatedHistory,
1 - COMPRESSION_PRESERVE_THRESHOLD,
);
const historyToCompress = curatedHistory.slice(0, splitPoint);
const historyToKeep = curatedHistory.slice(splitPoint);
const summaryResponse = await this.config
.getContentGenerator()
.generateContent(
{
model,
contents: [
...historyToCompress,
{
role: 'user',
parts: [
{
text: 'First, reason in your scratchpad. Then, generate the <state_snapshot>.',
},
],
},
],
config: {
systemInstruction: { text: getCompressionPrompt() },
},
},
prompt_id,
);
const summary = getResponseText(summaryResponse) ?? '';
const chat = await this.startChat([
{
role: 'user',
parts: [{ text: summary }],
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the additional context!' }],
},
...historyToKeep,
]);
this.forceFullIdeContext = true;
// Estimate token count 1 token ≈ 4 characters
const newTokenCount = Math.floor(
chat
.getHistory()
.reduce((total, content) => total + JSON.stringify(content).length, 0) /
4,
);
logChatCompression(
this.config,
makeChatCompressionEvent({
tokens_before: originalTokenCount,
tokens_after: newTokenCount,
}),
);
if (newTokenCount > originalTokenCount) {
this.hasFailedCompressionAttempt = !force && true;
return {
originalTokenCount,
newTokenCount,
compressionStatus:
CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
};
} else {
this.chat = chat; // Chat compression successful, set new state.
uiTelemetryService.setLastPromptTokenCount(newTokenCount);
}
logChatCompression(
this.config,
makeChatCompressionEvent({
tokens_before: originalTokenCount,
tokens_after: newTokenCount,
}),
);
return {
originalTokenCount,
newTokenCount,
compressionStatus: CompressionStatus.COMPRESSED,
};
}
}
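
The rewritten `tryCompressChat` above estimates the post-compression size with a character heuristic rather than a tokenizer call (roughly 4 characters per token). A self-contained sketch of that estimate, assuming only the `Content` type from `@google/genai`:

```ts
import type { Content } from '@google/genai';

// Character-based token estimate mirroring the heuristic in the hunk above:
// serialize each history entry and assume roughly 4 characters per token.
export function estimateHistoryTokens(history: Content[]): number {
  const totalChars = history.reduce(
    (total, content) => total + JSON.stringify(content).length,
    0,
  );
  return Math.floor(totalChars / 4);
}

// Example: compare the estimate against the pre-compression count to decide
// whether the summary actually shrank the context.
// const inflated = estimateHistoryTokens(newHistory) > originalTokenCount;
```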

View File

@@ -58,7 +58,6 @@ export type ContentGeneratorConfig = {
vertexai?: boolean;
authType?: AuthType | undefined;
enableOpenAILogging?: boolean;
openAILoggingDir?: string;
// Timeout configuration in milliseconds
timeout?: number;
// Maximum retries for failed requests

View File

@@ -32,7 +32,6 @@ export class OpenAIContentGenerator implements ContentGenerator {
telemetryService: new DefaultTelemetryService(
cliConfig,
contentGeneratorConfig.enableOpenAILogging,
contentGeneratorConfig.openAILoggingDir,
),
errorHandler: new EnhancedErrorHandler(
(error: unknown, request: GenerateContentParameters) =>

View File

@@ -7,7 +7,7 @@
import type { Config } from '../../config/config.js';
import { logApiError, logApiResponse } from '../../telemetry/loggers.js';
import { ApiErrorEvent, ApiResponseEvent } from '../../telemetry/types.js';
import { OpenAILogger } from '../../utils/openaiLogger.js';
import { openaiLogger } from '../../utils/openaiLogger.js';
import type { GenerateContentResponse } from '@google/genai';
import type OpenAI from 'openai';
@@ -43,17 +43,10 @@ export interface TelemetryService {
}
export class DefaultTelemetryService implements TelemetryService {
private logger: OpenAILogger;
constructor(
private config: Config,
private enableOpenAILogging: boolean = false,
openAILoggingDir?: string,
) {
// Always create a new logger instance to ensure correct working directory
// If no custom directory is provided, undefined will use the default path
this.logger = new OpenAILogger(openAILoggingDir);
}
) {}
async logSuccess(
context: RequestContext,
@@ -75,7 +68,7 @@ export class DefaultTelemetryService implements TelemetryService {
// Log interaction if enabled
if (this.enableOpenAILogging && openaiRequest && openaiResponse) {
await this.logger.logInteraction(openaiRequest, openaiResponse);
await openaiLogger.logInteraction(openaiRequest, openaiResponse);
}
}
@@ -104,7 +97,7 @@ export class DefaultTelemetryService implements TelemetryService {
// Log error interaction if enabled
if (this.enableOpenAILogging && openaiRequest) {
await this.logger.logInteraction(
await openaiLogger.logInteraction(
openaiRequest,
undefined,
error as Error,
@@ -144,7 +137,7 @@ export class DefaultTelemetryService implements TelemetryService {
openaiChunks.length > 0
) {
const combinedResponse = this.combineOpenAIChunksForLogging(openaiChunks);
await this.logger.logInteraction(openaiRequest, combinedResponse);
await openaiLogger.logInteraction(openaiRequest, combinedResponse);
}
}
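
For orientation, the edits above drop the per-request `new OpenAILogger(...)` construction in favor of a shared `openaiLogger` export. A minimal sketch of what such a module-level singleton could look like, assuming the logger simply persists request/response pairs as JSON; the real `utils/openaiLogger.ts` is not shown in this diff and may differ:

```ts
import * as fs from 'node:fs/promises';
import * as path from 'node:path';

// Hypothetical sketch only; the actual OpenAILogger implementation is not
// part of this diff and may be structured differently.
export class OpenAILogger {
  constructor(private readonly logDir: string = path.join('logs', 'openai')) {}

  async logInteraction(
    request: unknown,
    response?: unknown,
    error?: Error,
  ): Promise<void> {
    await fs.mkdir(this.logDir, { recursive: true });
    const file = path.join(this.logDir, `openai-${Date.now()}.json`);
    await fs.writeFile(
      file,
      JSON.stringify({ request, response, error: error?.message }, null, 2),
    );
  }
}

// Shared instance so every telemetry service logs to the same default location.
export const openaiLogger = new OpenAILogger();
```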

View File

@@ -64,12 +64,6 @@ describe('normalize', () => {
expect(normalize('qwen-vl-max-latest')).toBe('qwen-vl-max-latest');
});
it('should preserve date suffixes for Kimi K2 models', () => {
expect(normalize('kimi-k2-0905-preview')).toBe('kimi-k2-0905');
expect(normalize('kimi-k2-0711-preview')).toBe('kimi-k2-0711');
expect(normalize('kimi-k2-turbo-preview')).toBe('kimi-k2-turbo');
});
it('should remove date like suffixes', () => {
expect(normalize('deepseek-r1-0528')).toBe('deepseek-r1');
});
@@ -219,7 +213,7 @@ describe('tokenLimit', () => {
});
});
describe('DeepSeek', () => {
describe('Other models', () => {
it('should return the correct limit for deepseek-r1', () => {
expect(tokenLimit('deepseek-r1')).toBe(131072);
});
@@ -232,27 +226,9 @@ describe('tokenLimit', () => {
it('should return the correct limit for deepseek-v3.2', () => {
expect(tokenLimit('deepseek-v3.2-exp')).toBe(131072);
});
});
describe('Moonshot Kimi', () => {
it('should return the correct limit for kimi-k2-0905-preview', () => {
expect(tokenLimit('kimi-k2-0905-preview')).toBe(262144); // 256K
expect(tokenLimit('kimi-k2-0905')).toBe(262144);
});
it('should return the correct limit for kimi-k2-turbo-preview', () => {
expect(tokenLimit('kimi-k2-turbo-preview')).toBe(262144); // 256K
expect(tokenLimit('kimi-k2-turbo')).toBe(262144);
});
it('should return the correct limit for kimi-k2-0711-preview', () => {
expect(tokenLimit('kimi-k2-0711-preview')).toBe(131072); // 128K
expect(tokenLimit('kimi-k2-0711')).toBe(131072);
});
it('should return the correct limit for kimi-k2-instruct', () => {
expect(tokenLimit('kimi-k2-instruct')).toBe(131072); // 128K
expect(tokenLimit('kimi-k2-instruct')).toBe(131072);
});
});
describe('Other models', () => {
it('should return the correct limit for gpt-oss', () => {
expect(tokenLimit('gpt-oss')).toBe(131072);
});

View File

@@ -47,13 +47,8 @@ export function normalize(model: string): string {
// remove trailing build / date / revision suffixes:
// - dates (e.g., -20250219), -v1, version numbers, 'latest', 'preview' etc.
s = s.replace(/-preview/g, '');
// Special handling for model names that include date/version as part of the model identifier
// - Qwen models: qwen-plus-latest, qwen-flash-latest, qwen-vl-max-latest
// - Kimi models: kimi-k2-0905, kimi-k2-0711, etc. (keep date for version distinction)
if (
!s.match(/^qwen-(?:plus|flash|vl-max)-latest$/) &&
!s.match(/^kimi-k2-\d{4}$/)
) {
// Special handling for Qwen model names that include "-latest" as part of the model name
if (!s.match(/^qwen-(?:plus|flash|vl-max)-latest$/)) {
// Regex breakdown:
// -(?:...)$ - Non-capturing group for suffixes at the end of the string
// The following patterns are matched within the group:
@@ -170,16 +165,9 @@ const PATTERNS: Array<[RegExp, TokenCount]> = [
[/^deepseek-v3(?:\.\d+)?(?:-.*)?$/, LIMITS['128k']],
// -------------------
// Moonshot / Kimi
// -------------------
[/^kimi-k2-0905$/, LIMITS['256k']], // Kimi-k2-0905-preview: 256K context
[/^kimi-k2-turbo.*$/, LIMITS['256k']], // Kimi-k2-turbo-preview: 256K context
[/^kimi-k2-0711$/, LIMITS['128k']], // Kimi-k2-0711-preview: 128K context
[/^kimi-k2-instruct.*$/, LIMITS['128k']], // Kimi-k2-instruct: 128K context
// -------------------
// GPT-OSS / Llama & Mistral examples
// GPT-OSS / Kimi / Llama & Mistral examples
// -------------------
[/^kimi-k2-instruct.*$/, LIMITS['128k']],
[/^gpt-oss.*$/, LIMITS['128k']],
[/^llama-4-scout.*$/, LIMITS['10m']],
[/^mistral-large-2.*$/, LIMITS['128k']],
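
The pattern table above is consulted by first match once the model id has been normalized. A small illustrative lookup, where the `[RegExp, TokenCount]` pairs and the 128K fallback are assumptions modeled on the hunk above rather than the file's exact exports:

```ts
type TokenCount = number;

// Illustrative first-match lookup over [regex, limit] pairs; the real
// tokenLimits.ts PATTERNS table and default value may differ.
function lookupTokenLimit(
  normalizedModel: string,
  patterns: Array<[RegExp, TokenCount]>,
  fallback: TokenCount = 131_072, // 128K
): TokenCount {
  for (const [pattern, limit] of patterns) {
    if (pattern.test(normalizedModel)) {
      return limit;
    }
  }
  return fallback;
}

// e.g. lookupTokenLimit('kimi-k2-instruct', [[/^kimi-k2-instruct.*$/, 131_072]])
// returns 131072, matching the expectation in the test hunk above.
```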

View File

@@ -153,9 +153,6 @@ export enum CompressionStatus {
/** The compression failed due to an error counting tokens */
COMPRESSION_FAILED_TOKEN_COUNT_ERROR,
/** The compression failed due to receiving an empty or null summary */
COMPRESSION_FAILED_EMPTY_SUMMARY,
/** The compression was not necessary and no action was taken */
NOOP,
}

View File

@@ -113,7 +113,7 @@ describe('IdeClient', () => {
'utf8',
);
expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
new URL('http://127.0.0.1:8080/mcp'),
new URL('http://localhost:8080/mcp'),
expect.any(Object),
);
expect(mockClient.connect).toHaveBeenCalledWith(mockHttpTransport);
@@ -181,7 +181,7 @@ describe('IdeClient', () => {
await ideClient.connect();
expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
new URL('http://127.0.0.1:9090/mcp'),
new URL('http://localhost:9090/mcp'),
expect.any(Object),
);
expect(mockClient.connect).toHaveBeenCalledWith(mockHttpTransport);
@@ -230,7 +230,7 @@ describe('IdeClient', () => {
await ideClient.connect();
expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
new URL('http://127.0.0.1:8080/mcp'),
new URL('http://localhost:8080/mcp'),
expect.any(Object),
);
expect(ideClient.getConnectionStatus().status).toBe(
@@ -665,7 +665,7 @@ describe('IdeClient', () => {
await ideClient.connect();
expect(StreamableHTTPClientTransport).toHaveBeenCalledWith(
new URL('http://127.0.0.1:8080/mcp'),
new URL('http://localhost:8080/mcp'),
expect.objectContaining({
requestInit: {
headers: {

View File

@@ -667,10 +667,10 @@ export class IdeClient {
}
private createProxyAwareFetch() {
// ignore proxy for '127.0.0.1' by default to allow connecting to the ide mcp server
// ignore proxy for 'localhost' by default to allow connecting to the ide mcp server
const existingNoProxy = process.env['NO_PROXY'] || '';
const agent = new EnvHttpProxyAgent({
noProxy: [existingNoProxy, '127.0.0.1'].filter(Boolean).join(','),
noProxy: [existingNoProxy, 'localhost'].filter(Boolean).join(','),
});
const undiciPromise = import('undici');
return async (url: string | URL, init?: RequestInit): Promise<Response> => {
@@ -851,5 +851,5 @@ export class IdeClient {
function getIdeServerHost() {
const isInContainer =
fs.existsSync('/.dockerenv') || fs.existsSync('/run/.containerenv');
return isInContainer ? 'host.docker.internal' : '127.0.0.1';
return isInContainer ? 'host.docker.internal' : 'localhost';
}

View File

@@ -112,19 +112,14 @@ describe('ide-installer', () => {
platform: 'linux',
});
await installer.install();
// Note: The implementation uses process.platform, not the mocked platform
const isActuallyWindows = process.platform === 'win32';
const expectedCommand = isActuallyWindows ? '"code"' : 'code';
expect(child_process.spawnSync).toHaveBeenCalledWith(
expectedCommand,
'code',
[
'--install-extension',
'qwenlm.qwen-code-vscode-ide-companion',
'--force',
],
{ stdio: 'pipe', shell: isActuallyWindows },
{ stdio: 'pipe' },
);
});

View File

@@ -117,16 +117,15 @@ class VsCodeInstaller implements IdeInstaller {
};
}
const isWindows = process.platform === 'win32';
try {
const result = child_process.spawnSync(
isWindows ? `"${commandPath}"` : commandPath,
commandPath,
[
'--install-extension',
'qwenlm.qwen-code-vscode-ide-companion',
'--force',
],
{ stdio: 'pipe', shell: isWindows },
{ stdio: 'pipe' },
);
if (result.status !== 0) {

View File

@@ -48,7 +48,6 @@ export * from './utils/systemEncoding.js';
export * from './utils/textUtils.js';
export * from './utils/formatters.js';
export * from './utils/generateContentResponseUtilities.js';
export * from './utils/ripgrepUtils.js';
export * from './utils/filesearch/fileSearch.js';
export * from './utils/errorParsing.js';
export * from './utils/workspaceContext.js';
@@ -98,12 +97,10 @@ export * from './tools/write-file.js';
export * from './tools/web-fetch.js';
export * from './tools/memoryTool.js';
export * from './tools/shell.js';
export * from './tools/web-search/index.js';
export * from './tools/web-search.js';
export * from './tools/read-many-files.js';
export * from './tools/mcp-client.js';
export * from './tools/mcp-tool.js';
export * from './tools/task.js';
export * from './tools/todoWrite.js';
// MCP OAuth
export { MCPOAuthProvider } from './mcp/oauth-provider.js';

View File

@@ -1,372 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
ChatCompressionService,
findCompressSplitPoint,
} from './chatCompressionService.js';
import type { Content, GenerateContentResponse } from '@google/genai';
import { CompressionStatus } from '../core/turn.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
import { tokenLimit } from '../core/tokenLimits.js';
import type { GeminiChat } from '../core/geminiChat.js';
import type { Config } from '../config/config.js';
import { getInitialChatHistory } from '../utils/environmentContext.js';
import type { ContentGenerator } from '../core/contentGenerator.js';
vi.mock('../telemetry/uiTelemetry.js');
vi.mock('../core/tokenLimits.js');
vi.mock('../telemetry/loggers.js');
vi.mock('../utils/environmentContext.js');
describe('findCompressSplitPoint', () => {
it('should throw an error for non-positive numbers', () => {
expect(() => findCompressSplitPoint([], 0)).toThrow(
'Fraction must be between 0 and 1',
);
});
it('should throw an error for a fraction greater than or equal to 1', () => {
expect(() => findCompressSplitPoint([], 1)).toThrow(
'Fraction must be between 0 and 1',
);
});
it('should handle an empty history', () => {
expect(findCompressSplitPoint([], 0.5)).toBe(0);
});
it('should handle a fraction in the middle', () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (19%)
{ role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (40%)
{ role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (60%)
{ role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (80%)
{ role: 'user', parts: [{ text: 'This is the fifth message.' }] }, // JSON length: 65 (100%)
];
expect(findCompressSplitPoint(history, 0.5)).toBe(4);
});
it('should handle a fraction of last index', () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (19%)
{ role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (40%)
{ role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (60%)
{ role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (80%)
{ role: 'user', parts: [{ text: 'This is the fifth message.' }] }, // JSON length: 65 (100%)
];
expect(findCompressSplitPoint(history, 0.9)).toBe(4);
});
it('should handle a fraction of after last index', () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'This is the first message.' }] }, // JSON length: 66 (24%)
{ role: 'model', parts: [{ text: 'This is the second message.' }] }, // JSON length: 68 (50%)
{ role: 'user', parts: [{ text: 'This is the third message.' }] }, // JSON length: 66 (74%)
{ role: 'model', parts: [{ text: 'This is the fourth message.' }] }, // JSON length: 68 (100%)
];
expect(findCompressSplitPoint(history, 0.8)).toBe(4);
});
it('should return earlier split point if no valid ones are after threshold', () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'This is the first message.' }] },
{ role: 'model', parts: [{ text: 'This is the second message.' }] },
{ role: 'user', parts: [{ text: 'This is the third message.' }] },
{ role: 'model', parts: [{ functionCall: { name: 'foo', args: {} } }] },
];
// Can't return 4 because the previous item has a function call.
expect(findCompressSplitPoint(history, 0.99)).toBe(2);
});
it('should handle a history with only one item', () => {
const historyWithEmptyParts: Content[] = [
{ role: 'user', parts: [{ text: 'Message 1' }] },
];
expect(findCompressSplitPoint(historyWithEmptyParts, 0.5)).toBe(0);
});
it('should handle history with weird parts', () => {
const historyWithEmptyParts: Content[] = [
{ role: 'user', parts: [{ text: 'Message 1' }] },
{
role: 'model',
parts: [{ fileData: { fileUri: 'derp', mimeType: 'text/plain' } }],
},
{ role: 'user', parts: [{ text: 'Message 2' }] },
];
expect(findCompressSplitPoint(historyWithEmptyParts, 0.5)).toBe(2);
});
});
describe('ChatCompressionService', () => {
let service: ChatCompressionService;
let mockChat: GeminiChat;
let mockConfig: Config;
const mockModel = 'gemini-pro';
const mockPromptId = 'test-prompt-id';
beforeEach(() => {
service = new ChatCompressionService();
mockChat = {
getHistory: vi.fn(),
} as unknown as GeminiChat;
mockConfig = {
getChatCompression: vi.fn(),
getContentGenerator: vi.fn(),
} as unknown as Config;
vi.mocked(tokenLimit).mockReturnValue(1000);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(500);
vi.mocked(getInitialChatHistory).mockImplementation(
async (_config, extraHistory) => extraHistory || [],
);
});
afterEach(() => {
vi.restoreAllMocks();
});
it('should return NOOP if history is empty', async () => {
vi.mocked(mockChat.getHistory).mockReturnValue([]);
const result = await service.compress(
mockChat,
mockPromptId,
false,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
expect(result.newHistory).toBeNull();
});
it('should return NOOP if previously failed and not forced', async () => {
vi.mocked(mockChat.getHistory).mockReturnValue([
{ role: 'user', parts: [{ text: 'hi' }] },
]);
const result = await service.compress(
mockChat,
mockPromptId,
false,
mockModel,
mockConfig,
true,
);
expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
expect(result.newHistory).toBeNull();
});
it('should return NOOP if under token threshold and not forced', async () => {
vi.mocked(mockChat.getHistory).mockReturnValue([
{ role: 'user', parts: [{ text: 'hi' }] },
]);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(600);
vi.mocked(tokenLimit).mockReturnValue(1000);
// Threshold is 0.7 * 1000 = 700. 600 < 700, so NOOP.
const result = await service.compress(
mockChat,
mockPromptId,
false,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(CompressionStatus.NOOP);
expect(result.newHistory).toBeNull();
});
it('should compress if over token threshold', async () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'msg1' }] },
{ role: 'model', parts: [{ text: 'msg2' }] },
{ role: 'user', parts: [{ text: 'msg3' }] },
{ role: 'model', parts: [{ text: 'msg4' }] },
];
vi.mocked(mockChat.getHistory).mockReturnValue(history);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(800);
vi.mocked(tokenLimit).mockReturnValue(1000);
const mockGenerateContent = vi.fn().mockResolvedValue({
candidates: [
{
content: {
parts: [{ text: 'Summary' }],
},
},
],
} as unknown as GenerateContentResponse);
vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
generateContent: mockGenerateContent,
} as unknown as ContentGenerator);
const result = await service.compress(
mockChat,
mockPromptId,
false,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(CompressionStatus.COMPRESSED);
expect(result.newHistory).not.toBeNull();
expect(result.newHistory![0].parts![0].text).toBe('Summary');
expect(mockGenerateContent).toHaveBeenCalled();
});
it('should force compress even if under threshold', async () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'msg1' }] },
{ role: 'model', parts: [{ text: 'msg2' }] },
{ role: 'user', parts: [{ text: 'msg3' }] },
{ role: 'model', parts: [{ text: 'msg4' }] },
];
vi.mocked(mockChat.getHistory).mockReturnValue(history);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
vi.mocked(tokenLimit).mockReturnValue(1000);
const mockGenerateContent = vi.fn().mockResolvedValue({
candidates: [
{
content: {
parts: [{ text: 'Summary' }],
},
},
],
} as unknown as GenerateContentResponse);
vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
generateContent: mockGenerateContent,
} as unknown as ContentGenerator);
const result = await service.compress(
mockChat,
mockPromptId,
true, // forced
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(CompressionStatus.COMPRESSED);
expect(result.newHistory).not.toBeNull();
});
it('should return FAILED if new token count is inflated', async () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'msg1' }] },
{ role: 'model', parts: [{ text: 'msg2' }] },
];
vi.mocked(mockChat.getHistory).mockReturnValue(history);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(10);
vi.mocked(tokenLimit).mockReturnValue(1000);
const longSummary = 'a'.repeat(1000); // Long summary to inflate token count
const mockGenerateContent = vi.fn().mockResolvedValue({
candidates: [
{
content: {
parts: [{ text: longSummary }],
},
},
],
} as unknown as GenerateContentResponse);
vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
generateContent: mockGenerateContent,
} as unknown as ContentGenerator);
const result = await service.compress(
mockChat,
mockPromptId,
true,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(
CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
);
expect(result.newHistory).toBeNull();
});
it('should return FAILED if summary is empty string', async () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'msg1' }] },
{ role: 'model', parts: [{ text: 'msg2' }] },
];
vi.mocked(mockChat.getHistory).mockReturnValue(history);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
vi.mocked(tokenLimit).mockReturnValue(1000);
const mockGenerateContent = vi.fn().mockResolvedValue({
candidates: [
{
content: {
parts: [{ text: '' }], // Empty summary
},
},
],
} as unknown as GenerateContentResponse);
vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
generateContent: mockGenerateContent,
} as unknown as ContentGenerator);
const result = await service.compress(
mockChat,
mockPromptId,
true,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(
CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
);
expect(result.newHistory).toBeNull();
expect(result.info.originalTokenCount).toBe(100);
expect(result.info.newTokenCount).toBe(100);
});
it('should return FAILED if summary is only whitespace', async () => {
const history: Content[] = [
{ role: 'user', parts: [{ text: 'msg1' }] },
{ role: 'model', parts: [{ text: 'msg2' }] },
];
vi.mocked(mockChat.getHistory).mockReturnValue(history);
vi.mocked(uiTelemetryService.getLastPromptTokenCount).mockReturnValue(100);
vi.mocked(tokenLimit).mockReturnValue(1000);
const mockGenerateContent = vi.fn().mockResolvedValue({
candidates: [
{
content: {
parts: [{ text: ' \n\t ' }], // Only whitespace
},
},
],
} as unknown as GenerateContentResponse);
vi.mocked(mockConfig.getContentGenerator).mockReturnValue({
generateContent: mockGenerateContent,
} as unknown as ContentGenerator);
const result = await service.compress(
mockChat,
mockPromptId,
true,
mockModel,
mockConfig,
false,
);
expect(result.info.compressionStatus).toBe(
CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
);
expect(result.newHistory).toBeNull();
});
});

View File

@@ -1,235 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import type { Content } from '@google/genai';
import type { Config } from '../config/config.js';
import type { GeminiChat } from '../core/geminiChat.js';
import { type ChatCompressionInfo, CompressionStatus } from '../core/turn.js';
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
import { tokenLimit } from '../core/tokenLimits.js';
import { getCompressionPrompt } from '../core/prompts.js';
import { getResponseText } from '../utils/partUtils.js';
import { logChatCompression } from '../telemetry/loggers.js';
import { makeChatCompressionEvent } from '../telemetry/types.js';
import { getInitialChatHistory } from '../utils/environmentContext.js';
/**
* Threshold for compression token count as a fraction of the model's token limit.
* If the chat history exceeds this threshold, it will be compressed.
*/
export const COMPRESSION_TOKEN_THRESHOLD = 0.7;
/**
* The fraction of the latest chat history to keep. A value of 0.3
* means that only the last 30% of the chat history will be kept after compression.
*/
export const COMPRESSION_PRESERVE_THRESHOLD = 0.3;
/**
* Returns the index of the oldest item to keep when compressing. May return
* contents.length which indicates that everything should be compressed.
*
* Exported for testing purposes.
*/
export function findCompressSplitPoint(
contents: Content[],
fraction: number,
): number {
if (fraction <= 0 || fraction >= 1) {
throw new Error('Fraction must be between 0 and 1');
}
const charCounts = contents.map((content) => JSON.stringify(content).length);
const totalCharCount = charCounts.reduce((a, b) => a + b, 0);
const targetCharCount = totalCharCount * fraction;
let lastSplitPoint = 0; // 0 is always valid (compress nothing)
let cumulativeCharCount = 0;
for (let i = 0; i < contents.length; i++) {
const content = contents[i];
if (
content.role === 'user' &&
!content.parts?.some((part) => !!part.functionResponse)
) {
if (cumulativeCharCount >= targetCharCount) {
return i;
}
lastSplitPoint = i;
}
cumulativeCharCount += charCounts[i];
}
// We found no split points after targetCharCount.
// Check if it's safe to compress everything.
const lastContent = contents[contents.length - 1];
if (
lastContent?.role === 'model' &&
!lastContent?.parts?.some((part) => part.functionCall)
) {
return contents.length;
}
// Can't compress everything so just compress at last splitpoint.
return lastSplitPoint;
}
export class ChatCompressionService {
async compress(
chat: GeminiChat,
promptId: string,
force: boolean,
model: string,
config: Config,
hasFailedCompressionAttempt: boolean,
): Promise<{ newHistory: Content[] | null; info: ChatCompressionInfo }> {
const curatedHistory = chat.getHistory(true);
// Regardless of `force`, don't do anything if the history is empty.
if (
curatedHistory.length === 0 ||
(hasFailedCompressionAttempt && !force)
) {
return {
newHistory: null,
info: {
originalTokenCount: 0,
newTokenCount: 0,
compressionStatus: CompressionStatus.NOOP,
},
};
}
const originalTokenCount = uiTelemetryService.getLastPromptTokenCount();
const contextPercentageThreshold =
config.getChatCompression()?.contextPercentageThreshold;
// Don't compress if not forced and we are under the limit.
if (!force) {
const threshold =
contextPercentageThreshold ?? COMPRESSION_TOKEN_THRESHOLD;
if (originalTokenCount < threshold * tokenLimit(model)) {
return {
newHistory: null,
info: {
originalTokenCount,
newTokenCount: originalTokenCount,
compressionStatus: CompressionStatus.NOOP,
},
};
}
}
const splitPoint = findCompressSplitPoint(
curatedHistory,
1 - COMPRESSION_PRESERVE_THRESHOLD,
);
const historyToCompress = curatedHistory.slice(0, splitPoint);
const historyToKeep = curatedHistory.slice(splitPoint);
if (historyToCompress.length === 0) {
return {
newHistory: null,
info: {
originalTokenCount,
newTokenCount: originalTokenCount,
compressionStatus: CompressionStatus.NOOP,
},
};
}
const summaryResponse = await config.getContentGenerator().generateContent(
{
model,
contents: [
...historyToCompress,
{
role: 'user',
parts: [
{
text: 'First, reason in your scratchpad. Then, generate the <state_snapshot>.',
},
],
},
],
config: {
systemInstruction: getCompressionPrompt(),
},
},
promptId,
);
const summary = getResponseText(summaryResponse) ?? '';
const isSummaryEmpty = !summary || summary.trim().length === 0;
let newTokenCount = originalTokenCount;
let extraHistory: Content[] = [];
if (!isSummaryEmpty) {
extraHistory = [
{
role: 'user',
parts: [{ text: summary }],
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the additional context!' }],
},
...historyToKeep,
];
// Use a shared utility to construct the initial history for an accurate token count.
const fullNewHistory = await getInitialChatHistory(config, extraHistory);
// Estimate token count 1 token ≈ 4 characters
newTokenCount = Math.floor(
fullNewHistory.reduce(
(total, content) => total + JSON.stringify(content).length,
0,
) / 4,
);
}
logChatCompression(
config,
makeChatCompressionEvent({
tokens_before: originalTokenCount,
tokens_after: newTokenCount,
}),
);
if (isSummaryEmpty) {
return {
newHistory: null,
info: {
originalTokenCount,
newTokenCount: originalTokenCount,
compressionStatus: CompressionStatus.COMPRESSION_FAILED_EMPTY_SUMMARY,
},
};
} else if (newTokenCount > originalTokenCount) {
return {
newHistory: null,
info: {
originalTokenCount,
newTokenCount,
compressionStatus:
CompressionStatus.COMPRESSION_FAILED_INFLATED_TOKEN_COUNT,
},
};
} else {
uiTelemetryService.setLastPromptTokenCount(newTokenCount);
return {
newHistory: extraHistory,
info: {
originalTokenCount,
newTokenCount,
compressionStatus: CompressionStatus.COMPRESSED,
},
};
}
}
}

View File

@@ -62,10 +62,9 @@ export type {
SubAgentToolResultEvent,
SubAgentFinishEvent,
SubAgentErrorEvent,
SubAgentApprovalRequestEvent,
} from './subagent-events.js';
export { SubAgentEventEmitter, SubAgentEventType } from './subagent-events.js';
export { SubAgentEventEmitter } from './subagent-events.js';
// Statistics and formatting
export type {

View File

@@ -32,6 +32,7 @@ import { GeminiChat } from '../core/geminiChat.js';
import { executeToolCall } from '../core/nonInteractiveToolExecutor.js';
import type { ToolRegistry } from '../tools/tool-registry.js';
import { type AnyDeclarativeTool } from '../tools/tools.js';
import { getEnvironmentContext } from '../utils/environmentContext.js';
import { ContextState, SubAgentScope } from './subagent.js';
import type {
ModelConfig,
@@ -43,20 +44,7 @@ import { SubagentTerminateMode } from './types.js';
vi.mock('../core/geminiChat.js');
vi.mock('../core/contentGenerator.js');
vi.mock('../utils/environmentContext.js', () => ({
getEnvironmentContext: vi.fn().mockResolvedValue([{ text: 'Env Context' }]),
getInitialChatHistory: vi.fn(async (_config, extraHistory) => [
{
role: 'user',
parts: [{ text: 'Env Context' }],
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the context!' }],
},
...(extraHistory ?? []),
]),
}));
vi.mock('../utils/environmentContext.js');
vi.mock('../core/nonInteractiveToolExecutor.js');
vi.mock('../ide/ide-client.js');
vi.mock('../core/client.js');
@@ -186,6 +174,9 @@ describe('subagent.ts', () => {
beforeEach(async () => {
vi.clearAllMocks();
vi.mocked(getEnvironmentContext).mockResolvedValue([
{ text: 'Env Context' },
]);
vi.mocked(createContentGenerator).mockResolvedValue({
getGenerativeModel: vi.fn(),
// eslint-disable-next-line @typescript-eslint/no-explicit-any

View File

@@ -16,7 +16,7 @@ import type {
ToolConfirmationOutcome,
ToolCallConfirmationDetails,
} from '../tools/tools.js';
import { getInitialChatHistory } from '../utils/environmentContext.js';
import { getEnvironmentContext } from '../utils/environmentContext.js';
import type {
Content,
Part,
@@ -807,7 +807,11 @@ export class SubAgentScope {
);
}
const envHistory = await getInitialChatHistory(this.runtimeContext);
const envParts = await getEnvironmentContext(this.runtimeContext);
const envHistory: Content[] = [
{ role: 'user', parts: envParts },
{ role: 'model', parts: [{ text: 'Got it. Thanks for the context!' }] },
];
const start_history = [
...envHistory,

View File

@@ -88,6 +88,17 @@ describe('GlobTool', () => {
expect(result.returnDisplay).toBe('Found 2 matching file(s)');
});
it('should find files case-sensitively when case_sensitive is true', async () => {
const params: GlobToolParams = { pattern: '*.txt', case_sensitive: true };
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('Found 1 file(s)');
expect(result.llmContent).toContain(path.join(tempRootDir, 'fileA.txt'));
expect(result.llmContent).not.toContain(
path.join(tempRootDir, 'FileB.TXT'),
);
});
it('should find files case-insensitively by default (pattern: *.TXT)', async () => {
const params: GlobToolParams = { pattern: '*.TXT' };
const invocation = globTool.build(params);
@@ -97,6 +108,18 @@ describe('GlobTool', () => {
expect(result.llmContent).toContain(path.join(tempRootDir, 'FileB.TXT'));
});
it('should find files case-insensitively when case_sensitive is false (pattern: *.TXT)', async () => {
const params: GlobToolParams = {
pattern: '*.TXT',
case_sensitive: false,
};
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('Found 2 file(s)');
expect(result.llmContent).toContain(path.join(tempRootDir, 'fileA.txt'));
expect(result.llmContent).toContain(path.join(tempRootDir, 'FileB.TXT'));
});
it('should find files using a pattern that includes a subdirectory', async () => {
const params: GlobToolParams = { pattern: 'sub/*.md' };
const invocation = globTool.build(params);
@@ -184,7 +207,7 @@ describe('GlobTool', () => {
const filesListed = llmContent
.trim()
.split(/\r?\n/)
.slice(2)
.slice(1)
.map((line) => line.trim())
.filter(Boolean);
@@ -197,13 +220,14 @@ describe('GlobTool', () => {
);
});
it('should return error if path is outside workspace', async () => {
it('should return a PATH_NOT_IN_WORKSPACE error if path is outside workspace', async () => {
// Bypassing validation to test execute method directly
vi.spyOn(globTool, 'validateToolParams').mockReturnValue(null);
const params: GlobToolParams = { pattern: '*.txt', path: '/etc' };
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.returnDisplay).toBe('Error: Path is not within workspace');
expect(result.error?.type).toBe(ToolErrorType.PATH_NOT_IN_WORKSPACE);
expect(result.returnDisplay).toBe('Path is not within workspace');
});
it('should return a GLOB_EXECUTION_ERROR on glob failure', async () => {
@@ -231,6 +255,15 @@ describe('GlobTool', () => {
expect(globTool.validateToolParams(params)).toBeNull();
});
it('should return null for valid parameters (pattern, path, and case_sensitive)', () => {
const params: GlobToolParams = {
pattern: '*.js',
path: 'sub',
case_sensitive: true,
};
expect(globTool.validateToolParams(params)).toBeNull();
});
it('should return error if pattern is missing (schema validation)', () => {
      // Intentionally define params as an object without the 'pattern' property
const params = { path: '.' };
@@ -264,6 +297,16 @@ describe('GlobTool', () => {
);
});
it('should return error if case_sensitive is provided but is not a boolean', () => {
const params = {
pattern: '*.ts',
case_sensitive: 'true',
} as unknown as GlobToolParams; // Force incorrect type
expect(globTool.validateToolParams(params)).toBe(
'params/case_sensitive must be boolean',
);
});
it("should return error if search path resolves outside the tool's root directory", () => {
// Create a globTool instance specifically for this test, with a deeper root
tempRootDir = path.join(tempRootDir, 'sub');
@@ -276,7 +319,7 @@ describe('GlobTool', () => {
path: '../../../../../../../../../../tmp', // Definitely outside
};
expect(specificGlobTool.validateToolParams(paramsOutside)).toContain(
'Path is not within workspace',
'resolves outside the allowed workspace directories',
);
});
@@ -286,14 +329,14 @@ describe('GlobTool', () => {
path: 'nonexistent_subdir',
};
expect(globTool.validateToolParams(params)).toContain(
'Path does not exist',
'Search path does not exist',
);
});
it('should return error if specified search path is a file, not a directory', async () => {
const params: GlobToolParams = { pattern: '*.txt', path: 'fileA.txt' };
expect(globTool.validateToolParams(params)).toContain(
'Path is not a directory',
'Search path is not a directory',
);
});
});
@@ -305,10 +348,20 @@ describe('GlobTool', () => {
expect(globTool.validateToolParams(validPath)).toBeNull();
expect(globTool.validateToolParams(invalidPath)).toContain(
'Path is not within workspace',
'resolves outside the allowed workspace directories',
);
});
it('should provide clear error messages when path is outside workspace', () => {
const invalidPath = { pattern: '*.ts', path: '/etc' };
const error = globTool.validateToolParams(invalidPath);
expect(error).toContain(
'resolves outside the allowed workspace directories',
);
expect(error).toContain(tempRootDir);
});
it('should work with paths in workspace subdirectories', async () => {
const params: GlobToolParams = { pattern: '*.md', path: 'sub' };
const invocation = globTool.build(params);
@@ -364,123 +417,47 @@ describe('GlobTool', () => {
expect(result.llmContent).toContain('Found 3 file(s)'); // fileA.txt, FileB.TXT, b.notignored.txt
expect(result.llmContent).not.toContain('a.qwenignored.txt');
});
});
describe('file count truncation', () => {
it('should truncate results when more than 100 files are found', async () => {
// Create 150 test files
for (let i = 1; i <= 150; i++) {
await fs.writeFile(
path.join(tempRootDir, `file${i}.trunctest`),
`content${i}`,
);
}
const params: GlobToolParams = { pattern: '*.trunctest' };
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
const llmContent = partListUnionToString(result.llmContent);
// Should report all 150 files found
expect(llmContent).toContain('Found 150 file(s)');
// Should include truncation notice
expect(llmContent).toContain('[50 files truncated] ...');
// Count the number of .trunctest files mentioned in the output
const fileMatches = llmContent.match(/file\d+\.trunctest/g);
expect(fileMatches).toBeDefined();
expect(fileMatches?.length).toBe(100);
// returnDisplay should indicate truncation
expect(result.returnDisplay).toBe(
'Found 150 matching file(s) (truncated)',
it('should not respect .gitignore when respect_git_ignore is false', async () => {
await fs.writeFile(path.join(tempRootDir, '.gitignore'), '*.ignored.txt');
await fs.writeFile(
path.join(tempRootDir, 'a.ignored.txt'),
'ignored content',
);
});
it('should not truncate when exactly 100 files are found', async () => {
// Create exactly 100 test files
for (let i = 1; i <= 100; i++) {
await fs.writeFile(
path.join(tempRootDir, `exact${i}.trunctest`),
`content${i}`,
);
}
const params: GlobToolParams = { pattern: '*.trunctest' };
const params: GlobToolParams = {
pattern: '*.txt',
respect_git_ignore: false,
};
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
// Should report all 100 files found
expect(result.llmContent).toContain('Found 100 file(s)');
// Should NOT include truncation notice
expect(result.llmContent).not.toContain('truncated');
// Should show all 100 files
expect(result.llmContent).toContain('exact1.trunctest');
expect(result.llmContent).toContain('exact100.trunctest');
// returnDisplay should NOT indicate truncation
expect(result.returnDisplay).toBe('Found 100 matching file(s)');
expect(result.llmContent).toContain('Found 3 file(s)'); // fileA.txt, FileB.TXT, a.ignored.txt
expect(result.llmContent).toContain('a.ignored.txt');
});
it('should not truncate when fewer than 100 files are found', async () => {
// Create 50 test files
for (let i = 1; i <= 50; i++) {
await fs.writeFile(
path.join(tempRootDir, `small${i}.trunctest`),
`content${i}`,
);
}
it('should not respect .qwenignore when respect_qwen_ignore is false', async () => {
await fs.writeFile(
path.join(tempRootDir, '.qwenignore'),
'*.qwenignored.txt',
);
await fs.writeFile(
path.join(tempRootDir, 'a.qwenignored.txt'),
'ignored content',
);
const params: GlobToolParams = { pattern: '*.trunctest' };
// Recreate the tool to pick up the new .qwenignore file
globTool = new GlobTool(mockConfig);
const params: GlobToolParams = {
pattern: '*.txt',
respect_qwen_ignore: false,
};
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
// Should report all 50 files found
expect(result.llmContent).toContain('Found 50 file(s)');
// Should NOT include truncation notice
expect(result.llmContent).not.toContain('truncated');
// returnDisplay should NOT indicate truncation
expect(result.returnDisplay).toBe('Found 50 matching file(s)');
});
it('should use correct singular/plural in truncation message for 1 file truncated', async () => {
// Create 101 test files (will truncate 1 file)
for (let i = 1; i <= 101; i++) {
await fs.writeFile(
path.join(tempRootDir, `singular${i}.trunctest`),
`content${i}`,
);
}
const params: GlobToolParams = { pattern: '*.trunctest' };
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
// Should use singular "file" for 1 truncated file
expect(result.llmContent).toContain('[1 file truncated] ...');
expect(result.llmContent).not.toContain('[1 files truncated]');
});
it('should use correct plural in truncation message for multiple files truncated', async () => {
// Create 105 test files (will truncate 5 files)
for (let i = 1; i <= 105; i++) {
await fs.writeFile(
path.join(tempRootDir, `plural${i}.trunctest`),
`content${i}`,
);
}
const params: GlobToolParams = { pattern: '*.trunctest' };
const invocation = globTool.build(params);
const result = await invocation.execute(abortSignal);
// Should use plural "files" for multiple truncated files
expect(result.llmContent).toContain('[5 files truncated] ...');
expect(result.llmContent).toContain('Found 3 file(s)'); // fileA.txt, FileB.TXT, a.qwenignored.txt
expect(result.llmContent).toContain('a.qwenignored.txt');
});
});
});
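
The new tests above cover the three optional `GlobToolParams` flags added in this change. A sketch of the combinations they exercise (values are illustrative only; all flags are optional):

```ts
// Optional flags exercised by the tests above; defaults are case-insensitive
// matching with .gitignore and .qwenignore respected.
const caseSensitive: GlobToolParams = { pattern: '*.TXT', case_sensitive: true };
const includeGitIgnored: GlobToolParams = { pattern: '*.txt', respect_git_ignore: false };
const includeQwenIgnored: GlobToolParams = { pattern: '*.txt', respect_qwen_ignore: false };
```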

View File

@@ -10,17 +10,10 @@ import { glob, escape } from 'glob';
import type { ToolInvocation, ToolResult } from './tools.js';
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
import { ToolNames } from './tool-names.js';
import { resolveAndValidatePath } from '../utils/paths.js';
import { shortenPath, makeRelative } from '../utils/paths.js';
import { type Config } from '../config/config.js';
import {
DEFAULT_FILE_FILTERING_OPTIONS,
type FileFilteringOptions,
} from '../config/constants.js';
import { DEFAULT_FILE_FILTERING_OPTIONS } from '../config/constants.js';
import { ToolErrorType } from './tool-error.js';
import { getErrorMessage } from '../utils/errors.js';
import type { FileDiscoveryService } from '../services/fileDiscoveryService.js';
const MAX_FILE_COUNT = 100;
// Subset of 'Path' interface provided by 'glob' that we can implement for testing
export interface GlobPath {
@@ -71,68 +64,118 @@ export interface GlobToolParams {
* The directory to search in (optional, defaults to current directory)
*/
path?: string;
/**
* Whether the search should be case-sensitive (optional, defaults to false)
*/
case_sensitive?: boolean;
/**
* Whether to respect .gitignore patterns (optional, defaults to true)
*/
respect_git_ignore?: boolean;
/**
* Whether to respect .qwenignore patterns (optional, defaults to true)
*/
respect_qwen_ignore?: boolean;
}
class GlobToolInvocation extends BaseToolInvocation<
GlobToolParams,
ToolResult
> {
private fileService: FileDiscoveryService;
constructor(
private config: Config,
params: GlobToolParams,
) {
super(params);
this.fileService = config.getFileService();
}
getDescription(): string {
let description = `'${this.params.pattern}'`;
if (this.params.path) {
description += ` in path '${this.params.path}'`;
const searchDir = path.resolve(
this.config.getTargetDir(),
this.params.path || '.',
);
const relativePath = makeRelative(searchDir, this.config.getTargetDir());
description += ` within ${shortenPath(relativePath)}`;
}
return description;
}
async execute(signal: AbortSignal): Promise<ToolResult> {
try {
// Default to target directory if no path is provided
const searchDirAbs = resolveAndValidatePath(
this.config,
this.params.path,
);
const searchLocationDescription = this.params.path
? `within ${searchDirAbs}`
: `in the workspace directory`;
const workspaceContext = this.config.getWorkspaceContext();
const workspaceDirectories = workspaceContext.getDirectories();
// Collect entries from the search directory
let pattern = this.params.pattern;
const fullPath = path.join(searchDirAbs, pattern);
if (fs.existsSync(fullPath)) {
pattern = escape(pattern);
// If a specific path is provided, resolve it and check if it's within workspace
let searchDirectories: readonly string[];
if (this.params.path) {
const searchDirAbsolute = path.resolve(
this.config.getTargetDir(),
this.params.path,
);
if (!workspaceContext.isPathWithinWorkspace(searchDirAbsolute)) {
const rawError = `Error: Path "${this.params.path}" is not within any workspace directory`;
return {
llmContent: rawError,
returnDisplay: `Path is not within workspace`,
error: {
message: rawError,
type: ToolErrorType.PATH_NOT_IN_WORKSPACE,
},
};
}
searchDirectories = [searchDirAbsolute];
} else {
// Search across all workspace directories
searchDirectories = workspaceDirectories;
}
const allEntries = (await glob(pattern, {
cwd: searchDirAbs,
withFileTypes: true,
nodir: true,
stat: true,
nocase: true,
dot: true,
follow: false,
signal,
})) as GlobPath[];
// Get centralized file discovery service
const fileDiscovery = this.config.getFileService();
// Collect entries from all search directories
const allEntries: GlobPath[] = [];
for (const searchDir of searchDirectories) {
let pattern = this.params.pattern;
const fullPath = path.join(searchDir, pattern);
if (fs.existsSync(fullPath)) {
pattern = escape(pattern);
}
const entries = (await glob(pattern, {
cwd: searchDir,
withFileTypes: true,
nodir: true,
stat: true,
nocase: !this.params.case_sensitive,
dot: true,
ignore: this.config.getFileExclusions().getGlobExcludes(),
follow: false,
signal,
})) as GlobPath[];
allEntries.push(...entries);
}
const relativePaths = allEntries.map((p) =>
path.relative(this.config.getTargetDir(), p.fullpath()),
);
const { filteredPaths } = this.fileService.filterFilesWithReport(
relativePaths,
this.getFileFilteringOptions(),
);
const { filteredPaths, gitIgnoredCount, qwenIgnoredCount } =
fileDiscovery.filterFilesWithReport(relativePaths, {
respectGitIgnore:
this.params?.respect_git_ignore ??
this.config.getFileFilteringOptions().respectGitIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectGitIgnore,
respectQwenIgnore:
this.params?.respect_qwen_ignore ??
this.config.getFileFilteringOptions().respectQwenIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectQwenIgnore,
});
const filteredAbsolutePaths = new Set(
filteredPaths.map((p) => path.resolve(this.config.getTargetDir(), p)),
@@ -143,8 +186,20 @@ class GlobToolInvocation extends BaseToolInvocation<
);
if (!filteredEntries || filteredEntries.length === 0) {
let message = `No files found matching pattern "${this.params.pattern}"`;
if (searchDirectories.length === 1) {
message += ` within ${searchDirectories[0]}`;
} else {
message += ` within ${searchDirectories.length} workspace directories`;
}
if (gitIgnoredCount > 0) {
message += ` (${gitIgnoredCount} files were git-ignored)`;
}
if (qwenIgnoredCount > 0) {
message += ` (${qwenIgnoredCount} files were qwen-ignored)`;
}
return {
llmContent: `No files found matching pattern "${this.params.pattern}" ${searchLocationDescription}`,
llmContent: message,
returnDisplay: `No files found`,
};
}
@@ -160,32 +215,29 @@ class GlobToolInvocation extends BaseToolInvocation<
oneDayInMs,
);
const totalFileCount = sortedEntries.length;
const truncated = totalFileCount > MAX_FILE_COUNT;
// Limit to MAX_FILE_COUNT if needed
const entriesToShow = truncated
? sortedEntries.slice(0, MAX_FILE_COUNT)
: sortedEntries;
const sortedAbsolutePaths = entriesToShow.map((entry) =>
const sortedAbsolutePaths = sortedEntries.map((entry) =>
entry.fullpath(),
);
const fileListDescription = sortedAbsolutePaths.join('\n');
const fileCount = sortedAbsolutePaths.length;
let resultMessage = `Found ${totalFileCount} file(s) matching "${this.params.pattern}" ${searchLocationDescription}`;
resultMessage += `, sorted by modification time (newest first):\n---\n${fileListDescription}`;
// Add truncation notice if needed
if (truncated) {
const omittedFiles = totalFileCount - MAX_FILE_COUNT;
const fileTerm = omittedFiles === 1 ? 'file' : 'files';
resultMessage += `\n---\n[${omittedFiles} ${fileTerm} truncated] ...`;
let resultMessage = `Found ${fileCount} file(s) matching "${this.params.pattern}"`;
if (searchDirectories.length === 1) {
resultMessage += ` within ${searchDirectories[0]}`;
} else {
resultMessage += ` across ${searchDirectories.length} workspace directories`;
}
if (gitIgnoredCount > 0) {
resultMessage += ` (${gitIgnoredCount} additional files were git-ignored)`;
}
if (qwenIgnoredCount > 0) {
resultMessage += ` (${qwenIgnoredCount} additional files were qwen-ignored)`;
}
resultMessage += `, sorted by modification time (newest first):\n${fileListDescription}`;
return {
llmContent: resultMessage,
returnDisplay: `Found ${totalFileCount} matching file(s)${truncated ? ' (truncated)' : ''}`,
returnDisplay: `Found ${fileCount} matching file(s)`,
};
} catch (error) {
const errorMessage =
@@ -194,7 +246,7 @@ class GlobToolInvocation extends BaseToolInvocation<
const rawError = `Error during glob search operation: ${errorMessage}`;
return {
llmContent: rawError,
returnDisplay: `Error: ${errorMessage || 'An unexpected error occurred.'}`,
returnDisplay: `Error: An unexpected error occurred.`,
error: {
message: rawError,
type: ToolErrorType.GLOB_EXECUTION_ERROR,
@@ -202,18 +254,6 @@ class GlobToolInvocation extends BaseToolInvocation<
};
}
}
private getFileFilteringOptions(): FileFilteringOptions {
const options = this.config.getFileFilteringOptions?.();
return {
respectGitIgnore:
options?.respectGitIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectGitIgnore,
respectQwenIgnore:
options?.respectQwenIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectQwenIgnore,
};
}
}
/**
@@ -226,19 +266,35 @@ export class GlobTool extends BaseDeclarativeTool<GlobToolParams, ToolResult> {
super(
GlobTool.Name,
'FindFiles',
'Fast file pattern matching tool that works with any codebase size\n- Supports glob patterns like "**/*.js" or "src/**/*.ts"\n- Returns matching file paths sorted by modification time\n- Use this tool when you need to find files by name patterns\n- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead\n- You have the capability to call multiple tools in a single response. It is always better to speculatively perform multiple searches as a batch that are potentially useful.',
'Efficiently finds files matching specific glob patterns (e.g., `src/**/*.ts`, `**/*.md`), returning absolute paths sorted by modification time (newest first). Ideal for quickly locating files based on their name or path structure, especially in large codebases.',
Kind.Search,
{
properties: {
pattern: {
description: 'The glob pattern to match files against',
description:
"The glob pattern to match against (e.g., '**/*.py', 'docs/*.md').",
type: 'string',
},
path: {
description:
'The directory to search in. If not specified, the current working directory will be used. IMPORTANT: Omit this field to use the default directory. DO NOT enter "undefined" or "null" - simply omit it for the default behavior. Must be a valid directory path if provided.',
'Optional: The absolute path to the directory to search within. If omitted, searches the root directory.',
type: 'string',
},
case_sensitive: {
description:
'Optional: Whether the search should be case-sensitive. Defaults to false.',
type: 'boolean',
},
respect_git_ignore: {
description:
'Optional: Whether to respect .gitignore patterns when finding files. Only available in git repositories. Defaults to true.',
type: 'boolean',
},
respect_qwen_ignore: {
description:
'Optional: Whether to respect .qwenignore patterns when finding files. Defaults to true.',
type: 'boolean',
},
},
required: ['pattern'],
type: 'object',
@@ -252,6 +308,29 @@ export class GlobTool extends BaseDeclarativeTool<GlobToolParams, ToolResult> {
protected override validateToolParamValues(
params: GlobToolParams,
): string | null {
const searchDirAbsolute = path.resolve(
this.config.getTargetDir(),
params.path || '.',
);
const workspaceContext = this.config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(searchDirAbsolute)) {
const directories = workspaceContext.getDirectories();
return `Search path ("${searchDirAbsolute}") resolves outside the allowed workspace directories: ${directories.join(', ')}`;
}
const targetDir = searchDirAbsolute || this.config.getTargetDir();
try {
if (!fs.existsSync(targetDir)) {
        return `Search path does not exist: ${targetDir}`;
}
if (!fs.statSync(targetDir).isDirectory()) {
return `Search path is not a directory: ${targetDir}`;
}
} catch (e: unknown) {
return `Error accessing search path: ${e}`;
}
if (
!params.pattern ||
typeof params.pattern !== 'string' ||
@@ -260,15 +339,6 @@ export class GlobTool extends BaseDeclarativeTool<GlobToolParams, ToolResult> {
return "The 'pattern' parameter cannot be empty.";
}
// Only validate path if one is provided
if (params.path) {
try {
resolveAndValidatePath(this.config, params.path);
} catch (error) {
return getErrorMessage(error);
}
}
return null;
}
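
The reordered validation above checks the search path before the pattern. A sketch of the outcomes each failure mode produces (paths are illustrative, and the error text is paraphrased loosely from the implementation above):

```ts
// Illustrative validation outcomes for the updated GlobTool (a sketch only):
globTool.validateToolParams({ pattern: '*.ts', path: '/etc' });
// => error: the path resolves outside the allowed workspace directories
globTool.validateToolParams({ pattern: '*.ts', path: 'missing-dir' });
// => error: the search path does not exist
globTool.validateToolParams({ pattern: '' });
// => "The 'pattern' parameter cannot be empty."
```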

View File

@@ -84,11 +84,11 @@ describe('GrepTool', () => {
expect(grepTool.validateToolParams(params)).toBeNull();
});
it('should return null for valid params (pattern, path, and glob)', () => {
it('should return null for valid params (pattern, path, and include)', () => {
const params: GrepToolParams = {
pattern: 'hello',
path: '.',
glob: '*.txt',
include: '*.txt',
};
expect(grepTool.validateToolParams(params)).toBeNull();
});
@@ -111,7 +111,7 @@ describe('GrepTool', () => {
const params: GrepToolParams = { pattern: 'hello', path: 'nonexistent' };
// Check for the core error message, as the full path might vary
expect(grepTool.validateToolParams(params)).toContain(
'Path does not exist:',
'Failed to access path stats for',
);
expect(grepTool.validateToolParams(params)).toContain('nonexistent');
});
@@ -155,8 +155,8 @@ describe('GrepTool', () => {
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should find matches with a glob filter', async () => {
const params: GrepToolParams = { pattern: 'hello', glob: '*.js' };
it('should find matches with an include glob', async () => {
const params: GrepToolParams = { pattern: 'hello', include: '*.js' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
@@ -169,7 +169,7 @@ describe('GrepTool', () => {
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should find matches with a glob filter and path', async () => {
it('should find matches with an include glob and path', async () => {
await fs.writeFile(
path.join(tempRootDir, 'sub', 'another.js'),
'const greeting = "hello";',
@@ -177,7 +177,7 @@ describe('GrepTool', () => {
const params: GrepToolParams = {
pattern: 'hello',
path: 'sub',
glob: '*.js',
include: '*.js',
};
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
@@ -244,23 +244,59 @@ describe('GrepTool', () => {
describe('multi-directory workspace', () => {
it('should search across all workspace directories when no path is specified', async () => {
// The new implementation searches only in the target directory (first workspace directory)
// when no path is specified, not across all workspace directories
const params: GrepToolParams = { pattern: 'world' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
// Should find matches in the target directory only
expect(result.llmContent).toContain(
'Found 3 matches for pattern "world" in the workspace directory',
// Create additional directory with test files
const secondDir = await fs.mkdtemp(
path.join(os.tmpdir(), 'grep-tool-second-'),
);
await fs.writeFile(
path.join(secondDir, 'other.txt'),
'hello from second directory\nworld in second',
);
await fs.writeFile(
path.join(secondDir, 'another.js'),
'function world() { return "test"; }',
);
// Matches from target directory
// Create a mock config with multiple directories
const multiDirConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () =>
createMockWorkspaceContext(tempRootDir, [secondDir]),
getFileExclusions: () => ({
getGlobExcludes: () => [],
}),
} as unknown as Config;
const multiDirGrepTool = new GrepTool(multiDirConfig);
const params: GrepToolParams = { pattern: 'world' };
const invocation = multiDirGrepTool.build(params);
const result = await invocation.execute(abortSignal);
// Should find matches in both directories
expect(result.llmContent).toContain(
'Found 5 matches for pattern "world"',
);
// Matches from first directory
expect(result.llmContent).toContain('fileA.txt');
expect(result.llmContent).toContain('L1: hello world');
expect(result.llmContent).toContain('L2: second line with world');
expect(result.llmContent).toContain('fileC.txt');
expect(result.llmContent).toContain('L1: another world in sub dir');
// Matches from second directory (with directory name prefix)
const secondDirName = path.basename(secondDir);
expect(result.llmContent).toContain(
`File: ${path.join(secondDirName, 'other.txt')}`,
);
expect(result.llmContent).toContain('L2: world in second');
expect(result.llmContent).toContain(
`File: ${path.join(secondDirName, 'another.js')}`,
);
expect(result.llmContent).toContain('L1: function world()');
// Clean up
await fs.rm(secondDir, { recursive: true, force: true });
});
it('should search only specified path within workspace directories', async () => {
@@ -310,18 +346,16 @@ describe('GrepTool', () => {
it('should generate correct description with pattern only', () => {
const params: GrepToolParams = { pattern: 'testPattern' };
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe("'testPattern' in path './'");
expect(invocation.getDescription()).toBe("'testPattern'");
});
it('should generate correct description with pattern and glob', () => {
it('should generate correct description with pattern and include', () => {
const params: GrepToolParams = {
pattern: 'testPattern',
glob: '*.ts',
include: '*.ts',
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe(
"'testPattern' in path './' (filter: '*.ts')",
);
expect(invocation.getDescription()).toBe("'testPattern' in *.ts");
});
it('should generate correct description with pattern and path', async () => {
@@ -332,37 +366,49 @@ describe('GrepTool', () => {
path: path.join('src', 'app'),
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toContain(
"'testPattern' in path 'src",
);
expect(invocation.getDescription()).toContain("app'");
// The path will be relative to the tempRootDir, so we check for containment.
expect(invocation.getDescription()).toContain("'testPattern' within");
expect(invocation.getDescription()).toContain(path.join('src', 'app'));
});
it('should indicate searching workspace directory when no path specified', () => {
it('should indicate searching across all workspace directories when no path specified', () => {
// Create a mock config with multiple directories
const multiDirConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () =>
createMockWorkspaceContext(tempRootDir, ['/another/dir']),
getFileExclusions: () => ({
getGlobExcludes: () => [],
}),
} as unknown as Config;
const multiDirGrepTool = new GrepTool(multiDirConfig);
const params: GrepToolParams = { pattern: 'testPattern' };
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe("'testPattern' in path './'");
const invocation = multiDirGrepTool.build(params);
expect(invocation.getDescription()).toBe(
"'testPattern' across all workspace directories",
);
});
it('should generate correct description with pattern, glob, and path', async () => {
it('should generate correct description with pattern, include, and path', async () => {
const dirPath = path.join(tempRootDir, 'src', 'app');
await fs.mkdir(dirPath, { recursive: true });
const params: GrepToolParams = {
pattern: 'testPattern',
glob: '*.ts',
include: '*.ts',
path: path.join('src', 'app'),
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toContain(
"'testPattern' in path 'src",
"'testPattern' in *.ts within",
);
expect(invocation.getDescription()).toContain("(filter: '*.ts')");
expect(invocation.getDescription()).toContain(path.join('src', 'app'));
});
it('should use ./ for root path in description', () => {
const params: GrepToolParams = { pattern: 'testPattern', path: '.' };
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe("'testPattern' in path '.'");
expect(invocation.getDescription()).toBe("'testPattern' within ./");
});
});
@@ -376,50 +422,67 @@ describe('GrepTool', () => {
}
});
it('should show all results when no limit is specified', async () => {
it('should limit results to default 20 matches', async () => {
const params: GrepToolParams = { pattern: 'testword' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
// New implementation shows all matches when limit is not specified
expect(result.llmContent).toContain('Found 30 matches');
expect(result.llmContent).not.toContain('truncated');
expect(result.returnDisplay).toBe('Found 30 matches');
expect(result.llmContent).toContain('Found 20 matches');
expect(result.llmContent).toContain(
'showing first 20 of 30+ total matches',
);
expect(result.llmContent).toContain('WARNING: Results truncated');
expect(result.returnDisplay).toContain(
'Found 20 matches (truncated from 30+)',
);
});
it('should respect custom limit parameter', async () => {
const params: GrepToolParams = { pattern: 'testword', limit: 5 };
it('should respect custom maxResults parameter', async () => {
const params: GrepToolParams = { pattern: 'testword', maxResults: 5 };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
// Should find 30 total but limit to 5
expect(result.llmContent).toContain('Found 30 matches');
expect(result.llmContent).toContain('25 lines truncated');
expect(result.returnDisplay).toContain('Found 30 matches (truncated)');
expect(result.llmContent).toContain('Found 5 matches');
expect(result.llmContent).toContain(
'showing first 5 of 30+ total matches',
);
expect(result.llmContent).toContain('current: 5');
expect(result.returnDisplay).toContain(
'Found 5 matches (truncated from 30+)',
);
});
it('should not show truncation warning when all results fit', async () => {
const params: GrepToolParams = { pattern: 'testword', limit: 50 };
const params: GrepToolParams = { pattern: 'testword', maxResults: 50 };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('Found 30 matches');
expect(result.llmContent).not.toContain('truncated');
expect(result.llmContent).not.toContain('WARNING: Results truncated');
expect(result.llmContent).not.toContain('showing first');
expect(result.returnDisplay).toBe('Found 30 matches');
});
it('should not validate limit parameter', () => {
// limit parameter has no validation constraints in the new implementation
const params = { pattern: 'test', limit: 5 };
const error = grepTool.validateToolParams(params as GrepToolParams);
expect(error).toBeNull();
it('should validate maxResults parameter', () => {
const invalidParams = [
{ pattern: 'test', maxResults: 0 },
{ pattern: 'test', maxResults: 101 },
{ pattern: 'test', maxResults: -1 },
{ pattern: 'test', maxResults: 1.5 },
];
invalidParams.forEach((params) => {
const error = grepTool.validateToolParams(params as GrepToolParams);
expect(error).toBeTruthy(); // Just check that validation fails
expect(error).toMatch(/maxResults|must be/); // Check it's about maxResults validation
});
});
it('should accept valid limit parameter', () => {
it('should accept valid maxResults parameter', () => {
const validParams = [
{ pattern: 'test', limit: 1 },
{ pattern: 'test', limit: 50 },
{ pattern: 'test', limit: 100 },
{ pattern: 'test', maxResults: 1 },
{ pattern: 'test', maxResults: 50 },
{ pattern: 'test', maxResults: 100 },
];
validParams.forEach((params) => {

View File

@@ -4,6 +4,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
import fs from 'node:fs';
import fsPromises from 'node:fs/promises';
import path from 'node:path';
import { EOL } from 'node:os';
@@ -12,15 +13,13 @@ import { globStream } from 'glob';
import type { ToolInvocation, ToolResult } from './tools.js';
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
import { ToolNames } from './tool-names.js';
import { resolveAndValidatePath } from '../utils/paths.js';
import { makeRelative, shortenPath } from '../utils/paths.js';
import { getErrorMessage, isNodeError } from '../utils/errors.js';
import { isGitRepository } from '../utils/gitUtils.js';
import type { Config } from '../config/config.js';
import type { FileExclusions } from '../utils/ignorePatterns.js';
import { ToolErrorType } from './tool-error.js';
const MAX_LLM_CONTENT_LENGTH = 20_000;
// --- Interfaces ---
/**
@@ -38,14 +37,14 @@ export interface GrepToolParams {
path?: string;
/**
* Glob pattern to filter files (e.g. "*.js", "*.{ts,tsx}")
* File pattern to include in the search (e.g. "*.js", "*.{ts,tsx}")
*/
glob?: string;
include?: string;
/**
* Maximum number of matching lines to return (optional, shows all if not specified)
* Maximum number of matches to return (optional, defaults to 20)
*/
limit?: number;
maxResults?: number;
}
/**
@@ -71,57 +70,121 @@ class GrepToolInvocation extends BaseToolInvocation<
this.fileExclusions = config.getFileExclusions();
}
/**
* Checks if a path is within the root directory and resolves it.
* @param relativePath Path relative to the root directory (or undefined for root).
* @returns The absolute path if valid and exists, or null if no path specified (to search all directories).
* @throws {Error} If path is outside root, doesn't exist, or isn't a directory.
*/
private resolveAndValidatePath(relativePath?: string): string | null {
// If no path specified, return null to indicate searching all workspace directories
if (!relativePath) {
return null;
}
const targetPath = path.resolve(this.config.getTargetDir(), relativePath);
// Security Check: Ensure the resolved path is within workspace boundaries
const workspaceContext = this.config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(targetPath)) {
const directories = workspaceContext.getDirectories();
throw new Error(
`Path validation failed: Attempted path "${relativePath}" resolves outside the allowed workspace directories: ${directories.join(', ')}`,
);
}
// Check existence and type after resolving
try {
const stats = fs.statSync(targetPath);
if (!stats.isDirectory()) {
throw new Error(`Path is not a directory: ${targetPath}`);
}
} catch (error: unknown) {
if (isNodeError(error) && error.code !== 'ENOENT') {
throw new Error(`Path does not exist: ${targetPath}`);
}
throw new Error(
`Failed to access path stats for ${targetPath}: ${error}`,
);
}
return targetPath;
}
async execute(signal: AbortSignal): Promise<ToolResult> {
try {
// Default to target directory if no path is provided
const searchDirAbs = resolveAndValidatePath(
this.config,
this.params.path,
);
const workspaceContext = this.config.getWorkspaceContext();
const searchDirAbs = this.resolveAndValidatePath(this.params.path);
const searchDirDisplay = this.params.path || '.';
// Perform grep search
const rawMatches = await this.performGrepSearch({
pattern: this.params.pattern,
path: searchDirAbs,
glob: this.params.glob,
signal,
});
// Determine which directories to search
let searchDirectories: readonly string[];
if (searchDirAbs === null) {
// No path specified - search all workspace directories
searchDirectories = workspaceContext.getDirectories();
} else {
// Specific path provided - search only that directory
searchDirectories = [searchDirAbs];
}
// Build search description
const searchLocationDescription = this.params.path
? `in path "${searchDirDisplay}"`
: `in the workspace directory`;
// Collect matches from all search directories
let allMatches: GrepMatch[] = [];
const maxResults = this.params.maxResults ?? 20; // Default to 20 results
let totalMatchesFound = 0;
let searchTruncated = false;
const filterDescription = this.params.glob
? ` (filter: "${this.params.glob}")`
: '';
for (const searchDir of searchDirectories) {
const matches = await this.performGrepSearch({
pattern: this.params.pattern,
path: searchDir,
include: this.params.include,
signal,
});
// Check if we have any matches
if (rawMatches.length === 0) {
const noMatchMsg = `No matches found for pattern "${this.params.pattern}" ${searchLocationDescription}${filterDescription}.`;
totalMatchesFound += matches.length;
// Add directory prefix if searching multiple directories
if (searchDirectories.length > 1) {
const dirName = path.basename(searchDir);
matches.forEach((match) => {
match.filePath = path.join(dirName, match.filePath);
});
}
// Apply result limiting
const remainingSlots = maxResults - allMatches.length;
if (remainingSlots <= 0) {
searchTruncated = true;
break;
}
if (matches.length > remainingSlots) {
allMatches = allMatches.concat(matches.slice(0, remainingSlots));
searchTruncated = true;
break;
} else {
allMatches = allMatches.concat(matches);
}
}
let searchLocationDescription: string;
if (searchDirAbs === null) {
const numDirs = workspaceContext.getDirectories().length;
searchLocationDescription =
numDirs > 1
? `across ${numDirs} workspace directories`
: `in the workspace directory`;
} else {
searchLocationDescription = `in path "${searchDirDisplay}"`;
}
if (allMatches.length === 0) {
const noMatchMsg = `No matches found for pattern "${this.params.pattern}" ${searchLocationDescription}${this.params.include ? ` (filter: "${this.params.include}")` : ''}.`;
return { llmContent: noMatchMsg, returnDisplay: `No matches found` };
}
// Apply line limit if specified
let truncatedByLineLimit = false;
let matchesToInclude = rawMatches;
if (
this.params.limit !== undefined &&
rawMatches.length > this.params.limit
) {
matchesToInclude = rawMatches.slice(0, this.params.limit);
truncatedByLineLimit = true;
}
const totalMatches = rawMatches.length;
const matchTerm = totalMatches === 1 ? 'match' : 'matches';
// Build header
const header = `Found ${totalMatches} ${matchTerm} for pattern "${this.params.pattern}" ${searchLocationDescription}${filterDescription}:\n---\n`;
// Group matches by file
const matchesByFile = matchesToInclude.reduce(
const matchesByFile = allMatches.reduce(
(acc, match) => {
const fileKey = match.filePath;
if (!acc[fileKey]) {
@@ -134,51 +197,46 @@ class GrepToolInvocation extends BaseToolInvocation<
{} as Record<string, GrepMatch[]>,
);
// Build grep output
let grepOutput = '';
const matchCount = allMatches.length;
const matchTerm = matchCount === 1 ? 'match' : 'matches';
// Build the header with truncation info if needed
let headerText = `Found ${matchCount} ${matchTerm} for pattern "${this.params.pattern}" ${searchLocationDescription}${this.params.include ? ` (filter: "${this.params.include}")` : ''}`;
if (searchTruncated) {
headerText += ` (showing first ${matchCount} of ${totalMatchesFound}+ total matches)`;
}
let llmContent = `${headerText}:
---
`;
for (const filePath in matchesByFile) {
grepOutput += `File: ${filePath}\n`;
llmContent += `File: ${filePath}\n`;
matchesByFile[filePath].forEach((match) => {
const trimmedLine = match.line.trim();
grepOutput += `L${match.lineNumber}: ${trimmedLine}\n`;
llmContent += `L${match.lineNumber}: ${trimmedLine}\n`;
});
grepOutput += '---\n';
llmContent += '---\n';
}
// Apply character limit as safety net
let truncatedByCharLimit = false;
if (grepOutput.length > MAX_LLM_CONTENT_LENGTH) {
grepOutput = grepOutput.slice(0, MAX_LLM_CONTENT_LENGTH) + '...';
truncatedByCharLimit = true;
// Add truncation guidance if results were limited
if (searchTruncated) {
llmContent += `\nWARNING: Results truncated to prevent context overflow. To see more results:
- Use a more specific pattern to reduce matches
- Add file filters with the 'include' parameter (e.g., "*.js", "src/**")
- Specify a narrower 'path' to search in a subdirectory
- Increase 'maxResults' parameter if you need more matches (current: ${maxResults})`;
}
// Count how many lines we actually included after character truncation
const finalLines = grepOutput
.split('\n')
.filter(
(line) =>
line.trim() && !line.startsWith('File:') && !line.startsWith('---'),
);
const includedLines = finalLines.length;
// Build result
let llmContent = header + grepOutput;
// Add truncation notice if needed
if (truncatedByLineLimit || truncatedByCharLimit) {
const omittedMatches = totalMatches - includedLines;
llmContent += ` [${omittedMatches} ${omittedMatches === 1 ? 'line' : 'lines'} truncated] ...`;
}
// Build display message
let displayMessage = `Found ${totalMatches} ${matchTerm}`;
if (truncatedByLineLimit || truncatedByCharLimit) {
displayMessage += ` (truncated)`;
let displayText = `Found ${matchCount} ${matchTerm}`;
if (searchTruncated) {
displayText += ` (truncated from ${totalMatchesFound}+)`;
}
return {
llmContent: llmContent.trim(),
returnDisplay: displayMessage,
returnDisplay: displayText,
};
} catch (error) {
console.error(`Error during GrepLogic execution: ${error}`);
@@ -271,26 +329,50 @@ class GrepToolInvocation extends BaseToolInvocation<
* @returns A string describing the grep
*/
getDescription(): string {
let description = `'${this.params.pattern}' in path '${this.params.path || './'}'`;
if (this.params.glob) {
description += ` (filter: '${this.params.glob}')`;
let description = `'${this.params.pattern}'`;
if (this.params.include) {
description += ` in ${this.params.include}`;
}
if (this.params.path) {
const resolvedPath = path.resolve(
this.config.getTargetDir(),
this.params.path,
);
if (
resolvedPath === this.config.getTargetDir() ||
this.params.path === '.'
) {
description += ` within ./`;
} else {
const relativePath = makeRelative(
resolvedPath,
this.config.getTargetDir(),
);
description += ` within ${shortenPath(relativePath)}`;
}
} else {
// When no path is specified, indicate searching all workspace directories
const workspaceContext = this.config.getWorkspaceContext();
const directories = workspaceContext.getDirectories();
if (directories.length > 1) {
description += ` across all workspace directories`;
}
}
return description;
}
/**
* Performs the actual search using the prioritized strategies.
* @param options Search options including pattern, absolute path, and glob filter.
* @param options Search options including pattern, absolute path, and include glob.
* @returns A promise resolving to an array of match objects.
*/
private async performGrepSearch(options: {
pattern: string;
path: string; // Expects absolute path
glob?: string;
include?: string;
signal: AbortSignal;
}): Promise<GrepMatch[]> {
const { pattern, path: absolutePath, glob } = options;
const { pattern, path: absolutePath, include } = options;
let strategyUsed = 'none';
try {
@@ -308,8 +390,8 @@ class GrepToolInvocation extends BaseToolInvocation<
'--ignore-case',
pattern,
];
if (glob) {
gitArgs.push('--', glob);
if (include) {
gitArgs.push('--', include);
}
try {
@@ -375,8 +457,8 @@ class GrepToolInvocation extends BaseToolInvocation<
})
.filter((dir): dir is string => !!dir);
commonExcludes.forEach((dir) => grepArgs.push(`--exclude-dir=${dir}`));
if (glob) {
grepArgs.push(`--include=${glob}`);
if (include) {
grepArgs.push(`--include=${include}`);
}
grepArgs.push(pattern);
grepArgs.push('.');
@@ -455,7 +537,7 @@ class GrepToolInvocation extends BaseToolInvocation<
'GrepLogic: Falling back to JavaScript grep implementation.',
);
strategyUsed = 'javascript fallback';
const globPattern = glob ? glob : '**/*';
const globPattern = include ? include : '**/*';
const ignorePatterns = this.fileExclusions.getGlobExcludes();
const filesIterator = globStream(globPattern, {
@@ -521,30 +603,32 @@ export class GrepTool extends BaseDeclarativeTool<GrepToolParams, ToolResult> {
constructor(private readonly config: Config) {
super(
GrepTool.Name,
'Grep',
'A powerful search tool for finding patterns in files\n\n Usage:\n - ALWAYS use Grep for search tasks. NEVER invoke `grep` or `rg` as a Bash command. The Grep tool has been optimized for correct permissions and access.\n - Supports full regex syntax (e.g., "log.*Error", "function\\s+\\w+")\n - Filter files with glob parameter (e.g., "*.js", "**/*.tsx")\n - Case-insensitive by default\n - Use Task tool for open-ended searches requiring multiple rounds\n',
'SearchText',
'Searches for a regular expression pattern within the content of files in a specified directory (or current working directory). Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.',
Kind.Search,
{
properties: {
pattern: {
type: 'string',
description:
'The regular expression pattern to search for in file contents',
},
glob: {
"The regular expression (regex) pattern to search for within file contents (e.g., 'function\\s+myFunction', 'import\\s+\\{.*\\}\\s+from\\s+.*').",
type: 'string',
description:
'Glob pattern to filter files (e.g. "*.js", "*.{ts,tsx}")',
},
path: {
description:
'Optional: The absolute path to the directory to search within. If omitted, searches the current working directory.',
type: 'string',
description:
'File or directory to search in. Defaults to current working directory.',
},
limit: {
type: 'number',
include: {
description:
'Limit output to first N matching lines. Optional - shows all matches if not specified.',
"Optional: A glob pattern to filter which files are searched (e.g., '*.js', '*.{ts,tsx}', 'src/**'). If omitted, searches all files (respecting potential global ignores).",
type: 'string',
},
maxResults: {
description:
'Optional: Maximum number of matches to return to prevent context overflow (default: 20, max: 100). Use lower values for broad searches, higher for specific searches.',
type: 'number',
minimum: 1,
maximum: 100,
},
},
required: ['pattern'],
@@ -553,6 +637,47 @@ export class GrepTool extends BaseDeclarativeTool<GrepToolParams, ToolResult> {
);
}
/**
* Checks if a path is within the root directory and resolves it.
* @param relativePath Path relative to the root directory (or undefined for root).
* @returns The absolute path if valid and exists, or null if no path specified (to search all directories).
* @throws {Error} If path is outside root, doesn't exist, or isn't a directory.
*/
private resolveAndValidatePath(relativePath?: string): string | null {
// If no path specified, return null to indicate searching all workspace directories
if (!relativePath) {
return null;
}
const targetPath = path.resolve(this.config.getTargetDir(), relativePath);
// Security Check: Ensure the resolved path is within workspace boundaries
const workspaceContext = this.config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(targetPath)) {
const directories = workspaceContext.getDirectories();
throw new Error(
`Path validation failed: Attempted path "${relativePath}" resolves outside the allowed workspace directories: ${directories.join(', ')}`,
);
}
// Check existence and type after resolving
try {
const stats = fs.statSync(targetPath);
if (!stats.isDirectory()) {
throw new Error(`Path is not a directory: ${targetPath}`);
}
} catch (error: unknown) {
if (isNodeError(error) && error.code !== 'ENOENT') {
throw new Error(`Path does not exist: ${targetPath}`);
}
throw new Error(
`Failed to access path stats for ${targetPath}: ${error}`,
);
}
return targetPath;
}
/**
* Validates the parameters for the tool
* @param params Parameters to validate
@@ -561,17 +686,27 @@ export class GrepTool extends BaseDeclarativeTool<GrepToolParams, ToolResult> {
protected override validateToolParamValues(
params: GrepToolParams,
): string | null {
// Validate pattern is a valid regex
try {
new RegExp(params.pattern);
} catch (error) {
return `Invalid regular expression pattern: ${params.pattern}. Error: ${getErrorMessage(error)}`;
return `Invalid regular expression pattern provided: ${params.pattern}. Error: ${getErrorMessage(error)}`;
}
// Validate maxResults if provided
if (params.maxResults !== undefined) {
if (
!Number.isInteger(params.maxResults) ||
params.maxResults < 1 ||
params.maxResults > 100
) {
return `maxResults must be an integer between 1 and 100, got: ${params.maxResults}`;
}
}
// Only validate path if one is provided
if (params.path) {
try {
resolveAndValidatePath(this.config, params.path);
this.resolveAndValidatePath(params.path);
} catch (error) {
return getErrorMessage(error);
}
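
With the new `maxResults` limiting in place, a hedged usage sketch (the default of 20 and the 1–100 bound come from the schema and validation above; the values shown are illustrative):

```ts
// Broad search: capped at the default of 20 matches, with a truncation
// warning that suggests narrowing the pattern, include glob, or path.
const broad: GrepToolParams = { pattern: 'TODO' };

// Narrower search with an explicit cap (must be an integer between 1 and 100).
const narrow: GrepToolParams = {
  pattern: 'function\\s+\\w+',
  include: '*.ts',
  path: 'src',
  maxResults: 50,
};
```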

View File

@@ -23,7 +23,6 @@ import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.j
import type { ChildProcess } from 'node:child_process';
import { spawn } from 'node:child_process';
import { ensureRipgrepPath } from '../utils/ripgrepUtils.js';
import { DEFAULT_FILE_FILTERING_OPTIONS } from '../config/constants.js';
// Mock ripgrepUtils
vi.mock('../utils/ripgrepUtils.js', () => ({
@@ -43,17 +42,11 @@ function createMockSpawn(
outputData?: string;
exitCode?: number;
signal?: string;
onCall?: (
command: string,
args: readonly string[],
spawnOptions?: unknown,
) => void;
} = {},
) {
const { outputData, exitCode = 0, signal, onCall } = options;
const { outputData, exitCode = 0, signal } = options;
return (command: string, args: readonly string[], spawnOptions?: unknown) => {
onCall?.(command, args, spawnOptions);
return () => {
const mockProcess = {
stdout: {
on: vi.fn(),
@@ -94,29 +87,19 @@ function createMockSpawn(
describe('RipGrepTool', () => {
let tempRootDir: string;
let grepTool: RipGrepTool;
let fileExclusionsMock: { getGlobExcludes: () => string[] };
const abortSignal = new AbortController().signal;
const mockConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () => createMockWorkspaceContext(tempRootDir),
getWorkingDir: () => tempRootDir,
getDebugMode: () => false,
getUseBuiltinRipgrep: () => true,
} as unknown as Config;
beforeEach(async () => {
vi.clearAllMocks();
(ensureRipgrepPath as Mock).mockResolvedValue('/mock/path/to/rg');
mockSpawn.mockReset();
mockSpawn.mockClear();
tempRootDir = await fs.mkdtemp(path.join(os.tmpdir(), 'grep-tool-root-'));
fileExclusionsMock = {
getGlobExcludes: vi.fn().mockReturnValue([]),
};
Object.assign(mockConfig, {
getFileExclusions: () => fileExclusionsMock,
getFileFilteringOptions: () => DEFAULT_FILE_FILTERING_OPTIONS,
});
grepTool = new RipGrepTool(mockConfig);
// Create some test files and directories
@@ -154,11 +137,11 @@ describe('RipGrepTool', () => {
expect(grepTool.validateToolParams(params)).toBeNull();
});
it('should return null for valid params (pattern, path, and glob)', () => {
it('should return null for valid params (pattern, path, and include)', () => {
const params: RipGrepToolParams = {
pattern: 'hello',
path: '.',
glob: '*.txt',
include: '*.txt',
};
expect(grepTool.validateToolParams(params)).toBeNull();
});
@@ -170,11 +153,9 @@ describe('RipGrepTool', () => {
);
});
it('should surface an error for invalid regex pattern', () => {
it('should return null for what would be an invalid regex pattern', () => {
const params: RipGrepToolParams = { pattern: '[[' };
expect(grepTool.validateToolParams(params)).toContain(
'Invalid regular expression pattern: [[',
);
expect(grepTool.validateToolParams(params)).toBeNull();
});
it('should return error if path does not exist', () => {
@@ -184,15 +165,17 @@ describe('RipGrepTool', () => {
};
// Check for the core error message, as the full path might vary
expect(grepTool.validateToolParams(params)).toContain(
'Path does not exist:',
'Failed to access path stats for',
);
expect(grepTool.validateToolParams(params)).toContain('nonexistent');
});
it('should allow path to be a file', () => {
it('should return error if path is a file, not a directory', async () => {
const filePath = path.join(tempRootDir, 'fileA.txt');
const params: RipGrepToolParams = { pattern: 'hello', path: filePath };
expect(grepTool.validateToolParams(params)).toBeNull();
expect(grepTool.validateToolParams(params)).toContain(
`Path is not a directory: ${filePath}`,
);
});
});
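
Taken together, the validation tests above pin down the following behaviors for `RipGrepTool` (a sketch; `mockConfig`, `tempRootDir`, and `fileA.txt` come from the test setup):

```ts
const tool = new RipGrepTool(mockConfig);
tool.validateToolParams({ pattern: '[[' });
// => null: regex syntax errors are left for ripgrep to report at execution time
tool.validateToolParams({ pattern: 'hello', path: 'does/not/exist' });
// => contains 'Failed to access path stats for'
tool.validateToolParams({ pattern: 'hello', path: path.join(tempRootDir, 'fileA.txt') });
// => contains 'Path is not a directory:'
```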
@@ -211,11 +194,13 @@ describe('RipGrepTool', () => {
expect(result.llmContent).toContain(
'Found 3 matches for pattern "world" in the workspace directory',
);
expect(result.llmContent).toContain('fileA.txt:1:hello world');
expect(result.llmContent).toContain('fileA.txt:2:second line with world');
expect(result.llmContent).toContain('File: fileA.txt');
expect(result.llmContent).toContain('L1: hello world');
expect(result.llmContent).toContain('L2: second line with world');
expect(result.llmContent).toContain(
'sub/fileC.txt:1:another world in sub dir',
`File: ${path.join('sub', 'fileC.txt')}`,
);
expect(result.llmContent).toContain('L1: another world in sub dir');
expect(result.returnDisplay).toBe('Found 3 matches');
});
@@ -234,33 +219,12 @@ describe('RipGrepTool', () => {
expect(result.llmContent).toContain(
'Found 1 match for pattern "world" in path "sub"',
);
expect(result.llmContent).toContain(
'fileC.txt:1:another world in sub dir',
);
expect(result.llmContent).toContain('File: fileC.txt'); // Path relative to 'sub'
expect(result.llmContent).toContain('L1: another world in sub dir');
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should use target directory when path is not provided', async () => {
mockSpawn.mockImplementationOnce(
createMockSpawn({
outputData: `fileA.txt:1:hello world${EOL}`,
exitCode: 0,
onCall: (_, args) => {
// Should search in the target directory (tempRootDir)
expect(args[args.length - 1]).toBe(tempRootDir);
},
}),
);
const params: RipGrepToolParams = { pattern: 'world' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
'Found 1 match for pattern "world" in the workspace directory',
);
});
it('should find matches with a glob filter', async () => {
it('should find matches with an include glob', async () => {
// Setup specific mock for this test
mockSpawn.mockImplementationOnce(
createMockSpawn({
@@ -269,19 +233,20 @@ describe('RipGrepTool', () => {
}),
);
const params: RipGrepToolParams = { pattern: 'hello', glob: '*.js' };
const params: RipGrepToolParams = { pattern: 'hello', include: '*.js' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
'Found 1 match for pattern "hello" in the workspace directory (filter: "*.js"):',
);
expect(result.llmContent).toContain('File: fileB.js');
expect(result.llmContent).toContain(
'fileB.js:2:function baz() { return "hello"; }',
'L2: function baz() { return "hello"; }',
);
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should find matches with a glob filter and path', async () => {
it('should find matches with an include glob and path', async () => {
await fs.writeFile(
path.join(tempRootDir, 'sub', 'another.js'),
'const greeting = "hello";',
@@ -326,115 +291,18 @@ describe('RipGrepTool', () => {
const params: RipGrepToolParams = {
pattern: 'hello',
path: 'sub',
glob: '*.js',
include: '*.js',
};
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
'Found 1 match for pattern "hello" in path "sub" (filter: "*.js")',
);
expect(result.llmContent).toContain(
'another.js:1:const greeting = "hello";',
);
expect(result.llmContent).toContain('File: another.js');
expect(result.llmContent).toContain('L1: const greeting = "hello";');
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should pass .qwenignore to ripgrep when respected', async () => {
await fs.writeFile(
path.join(tempRootDir, '.qwenignore'),
'ignored.txt\n',
);
mockSpawn.mockImplementationOnce(
createMockSpawn({
exitCode: 1,
onCall: (_, args) => {
expect(args).toContain('--ignore-file');
expect(args).toContain(path.join(tempRootDir, '.qwenignore'));
},
}),
);
const params: RipGrepToolParams = { pattern: 'secret' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
'No matches found for pattern "secret" in the workspace directory.',
);
expect(result.returnDisplay).toBe('No matches found');
});
it('should include .qwenignore matches when disabled in config', async () => {
await fs.writeFile(path.join(tempRootDir, '.qwenignore'), 'kept.txt\n');
await fs.writeFile(path.join(tempRootDir, 'kept.txt'), 'keep me');
Object.assign(mockConfig, {
getFileFilteringOptions: () => ({
respectGitIgnore: true,
respectQwenIgnore: false,
}),
});
mockSpawn.mockImplementationOnce(
createMockSpawn({
outputData: `kept.txt:1:keep me${EOL}`,
exitCode: 0,
onCall: (_, args) => {
expect(args).not.toContain('--ignore-file');
expect(args).not.toContain(path.join(tempRootDir, '.qwenignore'));
},
}),
);
const params: RipGrepToolParams = { pattern: 'keep' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain(
'Found 1 match for pattern "keep" in the workspace directory:',
);
expect(result.llmContent).toContain('kept.txt:1:keep me');
expect(result.returnDisplay).toBe('Found 1 match');
});
it('should disable gitignore when configured', async () => {
Object.assign(mockConfig, {
getFileFilteringOptions: () => ({
respectGitIgnore: false,
respectQwenIgnore: true,
}),
});
mockSpawn.mockImplementationOnce(
createMockSpawn({
exitCode: 1,
onCall: (_, args) => {
expect(args).toContain('--no-ignore-vcs');
},
}),
);
const params: RipGrepToolParams = { pattern: 'ignored' };
const invocation = grepTool.build(params);
await invocation.execute(abortSignal);
});
it('should truncate llm content when exceeding maximum length', async () => {
const longMatch = 'fileA.txt:1:' + 'a'.repeat(25_000);
mockSpawn.mockImplementationOnce(
createMockSpawn({
outputData: `${longMatch}${EOL}`,
exitCode: 0,
}),
);
const params: RipGrepToolParams = { pattern: 'a+' };
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(String(result.llmContent).length).toBeLessThanOrEqual(21_000);
expect(result.llmContent).toMatch(/\[\d+ lines? truncated\] \.\.\./);
expect(result.returnDisplay).toContain('truncated');
});
it('should return "No matches found" when pattern does not exist', async () => {
// Setup specific mock for no matches
mockSpawn.mockImplementationOnce(
@@ -452,10 +320,19 @@ describe('RipGrepTool', () => {
expect(result.returnDisplay).toBe('No matches found');
});
it('should throw validation error for invalid regex pattern', async () => {
it('should return an error from ripgrep for invalid regex pattern', async () => {
mockSpawn.mockImplementationOnce(
createMockSpawn({
exitCode: 2,
}),
);
const params: RipGrepToolParams = { pattern: '[[' };
expect(() => grepTool.build(params)).toThrow(
'Invalid regular expression pattern: [[',
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('ripgrep exited with code 2');
expect(result.returnDisplay).toContain(
'Error: ripgrep exited with code 2',
);
});
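To make the behavior change above concrete: the tool now surfaces ripgrep's own exit status instead of pre-validating the regex. Below is a self-contained sketch of exit-code handling consistent with these tests (0 = matches, 1 = no matches, anything else = error); the actual close handler lives in a hunk not shown in this diff, so treat the function name and details as illustrative.

```ts
import { spawn } from 'node:child_process';

// Sketch only: maps ripgrep exit codes the way these tests expect.
function runRg(rgPath: string, args: string[]): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(rgPath, args, { windowsHide: true });
    const out: Buffer[] = [];
    const err: Buffer[] = [];
    child.stdout.on('data', (c) => out.push(c));
    child.stderr.on('data', (c) => err.push(c));
    child.on('close', (code) => {
      if (code === 0) resolve(Buffer.concat(out).toString('utf8'));
      else if (code === 1) resolve(''); // no matches: not an error
      else
        reject(
          new Error(
            `ripgrep exited with code ${code}: ${Buffer.concat(err).toString('utf8')}`,
          ),
        );
    });
    child.on('error', reject);
  });
}
```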
@@ -502,7 +379,8 @@ describe('RipGrepTool', () => {
expect(result.llmContent).toContain(
'Found 1 match for pattern "foo.*bar" in the workspace directory:',
);
expect(result.llmContent).toContain('fileB.js:1:const foo = "bar";');
expect(result.llmContent).toContain('File: fileB.js');
expect(result.llmContent).toContain('L1: const foo = "bar";');
});
it('should be case-insensitive by default (JS fallback)', async () => {
@@ -552,9 +430,11 @@ describe('RipGrepTool', () => {
expect(result.llmContent).toContain(
'Found 2 matches for pattern "HELLO" in the workspace directory:',
);
expect(result.llmContent).toContain('fileA.txt:1:hello world');
expect(result.llmContent).toContain('File: fileA.txt');
expect(result.llmContent).toContain('L1: hello world');
expect(result.llmContent).toContain('File: fileB.js');
expect(result.llmContent).toContain(
'fileB.js:2:function baz() { return "hello"; }',
'L2: function baz() { return "hello"; }',
);
});
@@ -565,26 +445,6 @@ describe('RipGrepTool', () => {
);
});
it('should search within a single file when path is a file', async () => {
mockSpawn.mockImplementationOnce(
createMockSpawn({
outputData: `fileA.txt:1:hello world${EOL}fileA.txt:2:second line with world${EOL}`,
exitCode: 0,
}),
);
const params: RipGrepToolParams = {
pattern: 'world',
path: path.join(tempRootDir, 'fileA.txt'),
};
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('Found 2 matches');
expect(result.llmContent).toContain('fileA.txt:1:hello world');
expect(result.llmContent).toContain('fileA.txt:2:second line with world');
expect(result.returnDisplay).toBe('Found 2 matches');
});
it('should throw an error if ripgrep is not available', async () => {
// Make ensureRipgrepBinary throw
(ensureRipgrepPath as Mock).mockRejectedValue(
@@ -602,6 +462,191 @@ describe('RipGrepTool', () => {
});
});
describe('multi-directory workspace', () => {
it('should search across all workspace directories when no path is specified', async () => {
// Create additional directory with test files
const secondDir = await fs.mkdtemp(
path.join(os.tmpdir(), 'grep-tool-second-'),
);
await fs.writeFile(
path.join(secondDir, 'other.txt'),
'hello from second directory\nworld in second',
);
await fs.writeFile(
path.join(secondDir, 'another.js'),
'function world() { return "test"; }',
);
// Create a mock config with multiple directories
const multiDirConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () =>
createMockWorkspaceContext(tempRootDir, [secondDir]),
getDebugMode: () => false,
} as unknown as Config;
// Setup specific mock for this test - multi-directory search for 'world'
// Mock will be called twice - once for each directory
let callCount = 0;
mockSpawn.mockImplementation(() => {
callCount++;
const mockProcess = {
stdout: {
on: vi.fn(),
removeListener: vi.fn(),
},
stderr: {
on: vi.fn(),
removeListener: vi.fn(),
},
on: vi.fn(),
removeListener: vi.fn(),
kill: vi.fn(),
};
setTimeout(() => {
const stdoutDataHandler = mockProcess.stdout.on.mock.calls.find(
(call) => call[0] === 'data',
)?.[1];
const closeHandler = mockProcess.on.mock.calls.find(
(call) => call[0] === 'close',
)?.[1];
let outputData = '';
if (callCount === 1) {
// First directory (tempRootDir)
outputData =
[
'fileA.txt:1:hello world',
'fileA.txt:2:second line with world',
'sub/fileC.txt:1:another world in sub dir',
].join(EOL) + EOL;
} else if (callCount === 2) {
// Second directory (secondDir)
outputData =
[
'other.txt:2:world in second',
'another.js:1:function world() { return "test"; }',
].join(EOL) + EOL;
}
if (stdoutDataHandler && outputData) {
stdoutDataHandler(Buffer.from(outputData));
}
if (closeHandler) {
closeHandler(0);
}
}, 0);
return mockProcess as unknown as ChildProcess;
});
const multiDirGrepTool = new RipGrepTool(multiDirConfig);
const params: RipGrepToolParams = { pattern: 'world' };
const invocation = multiDirGrepTool.build(params);
const result = await invocation.execute(abortSignal);
// Should find matches in both directories
expect(result.llmContent).toContain(
'Found 5 matches for pattern "world"',
);
// Matches from first directory
expect(result.llmContent).toContain('fileA.txt');
expect(result.llmContent).toContain('L1: hello world');
expect(result.llmContent).toContain('L2: second line with world');
expect(result.llmContent).toContain('fileC.txt');
expect(result.llmContent).toContain('L1: another world in sub dir');
// Matches from both directories
expect(result.llmContent).toContain('other.txt');
expect(result.llmContent).toContain('L2: world in second');
expect(result.llmContent).toContain('another.js');
expect(result.llmContent).toContain('L1: function world()');
// Clean up
await fs.rm(secondDir, { recursive: true, force: true });
mockSpawn.mockClear();
});
it('should search only specified path within workspace directories', async () => {
// Create additional directory
const secondDir = await fs.mkdtemp(
path.join(os.tmpdir(), 'grep-tool-second-'),
);
await fs.mkdir(path.join(secondDir, 'sub'));
await fs.writeFile(
path.join(secondDir, 'sub', 'test.txt'),
'hello from second sub directory',
);
// Create a mock config with multiple directories
const multiDirConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () =>
createMockWorkspaceContext(tempRootDir, [secondDir]),
getDebugMode: () => false,
} as unknown as Config;
// Setup specific mock for this test - searching in 'sub' should only return matches from that directory
mockSpawn.mockImplementationOnce(() => {
const mockProcess = {
stdout: {
on: vi.fn(),
removeListener: vi.fn(),
},
stderr: {
on: vi.fn(),
removeListener: vi.fn(),
},
on: vi.fn(),
removeListener: vi.fn(),
kill: vi.fn(),
};
setTimeout(() => {
const onData = mockProcess.stdout.on.mock.calls.find(
(call) => call[0] === 'data',
)?.[1];
const onClose = mockProcess.on.mock.calls.find(
(call) => call[0] === 'close',
)?.[1];
if (onData) {
onData(Buffer.from(`fileC.txt:1:another world in sub dir${EOL}`));
}
if (onClose) {
onClose(0);
}
}, 0);
return mockProcess as unknown as ChildProcess;
});
const multiDirGrepTool = new RipGrepTool(multiDirConfig);
// Search only in the 'sub' directory of the first workspace
const params: RipGrepToolParams = { pattern: 'world', path: 'sub' };
const invocation = multiDirGrepTool.build(params);
const result = await invocation.execute(abortSignal);
// Should only find matches in the specified sub directory
expect(result.llmContent).toContain(
'Found 1 match for pattern "world" in path "sub"',
);
expect(result.llmContent).toContain('File: fileC.txt');
expect(result.llmContent).toContain('L1: another world in sub dir');
// Should not contain matches from second directory
expect(result.llmContent).not.toContain('test.txt');
// Clean up
await fs.rm(secondDir, { recursive: true, force: true });
});
});
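A note for readers of these assertions: when more than one workspace root is searched, the implementation later in this diff prefixes each match's relative path with the basename of the root it came from, which is why the expectations above only check for substrings such as 'other.txt'. A standalone sketch of that step (names are illustrative):

```ts
import * as path from 'node:path';

interface GrepMatch {
  filePath: string;
  lineNumber: number;
  line: string;
}

// Illustrative only: mirrors the per-directory prefixing in the execute() loop shown later.
function prefixMatches(
  searchDir: string,
  dirCount: number,
  matches: GrepMatch[],
): GrepMatch[] {
  if (dirCount <= 1) return matches;
  const dirName = path.basename(searchDir); // e.g. "grep-tool-second-xyz"
  return matches.map((m) => ({ ...m, filePath: path.join(dirName, m.filePath) }));
}
```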
describe('abort signal handling', () => {
it('should handle AbortSignal during search', async () => {
const controller = new AbortController();
@@ -666,9 +711,7 @@ describe('RipGrepTool', () => {
describe('error handling and edge cases', () => {
it('should handle workspace boundary violations', () => {
const params: RipGrepToolParams = { pattern: 'test', path: '../outside' };
expect(() => grepTool.build(params)).toThrow(
/Path is not within workspace/,
);
expect(() => grepTool.build(params)).toThrow(/Path validation failed/);
});
it('should handle empty directories gracefully', async () => {
@@ -1019,8 +1062,8 @@ describe('RipGrepTool', () => {
});
});
describe('glob pattern filtering', () => {
it('should handle multiple file extensions in glob pattern', async () => {
describe('include pattern filtering', () => {
it('should handle multiple file extensions in include pattern', async () => {
await fs.writeFile(
path.join(tempRootDir, 'test.ts'),
'typescript content',
@@ -1032,7 +1075,7 @@ describe('RipGrepTool', () => {
);
await fs.writeFile(path.join(tempRootDir, 'test.txt'), 'text content');
// Setup specific mock for this test - glob pattern should filter to only ts/tsx files
// Setup specific mock for this test - include pattern should filter to only ts/tsx files
mockSpawn.mockImplementationOnce(() => {
const mockProcess = {
stdout: {
@@ -1073,7 +1116,7 @@ describe('RipGrepTool', () => {
const params: RipGrepToolParams = {
pattern: 'content',
glob: '*.{ts,tsx}',
include: '*.{ts,tsx}',
};
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
@@ -1084,7 +1127,7 @@ describe('RipGrepTool', () => {
expect(result.llmContent).not.toContain('test.txt');
});
it('should handle directory patterns in glob', async () => {
it('should handle directory patterns in include', async () => {
await fs.mkdir(path.join(tempRootDir, 'src'), { recursive: true });
await fs.writeFile(
path.join(tempRootDir, 'src', 'main.ts'),
@@ -1092,7 +1135,7 @@ describe('RipGrepTool', () => {
);
await fs.writeFile(path.join(tempRootDir, 'other.ts'), 'other code');
// Setup specific mock for this test - glob pattern should filter to only src/** files
// Setup specific mock for this test - include pattern should filter to only src/** files
mockSpawn.mockImplementationOnce(() => {
const mockProcess = {
stdout: {
@@ -1129,7 +1172,7 @@ describe('RipGrepTool', () => {
const params: RipGrepToolParams = {
pattern: 'code',
glob: 'src/**',
include: 'src/**',
};
const invocation = grepTool.build(params);
const result = await invocation.execute(abortSignal);
@@ -1146,15 +1189,13 @@ describe('RipGrepTool', () => {
expect(invocation.getDescription()).toBe("'testPattern'");
});
it('should generate correct description with pattern and glob', () => {
it('should generate correct description with pattern and include', () => {
const params: RipGrepToolParams = {
pattern: 'testPattern',
glob: '*.ts',
include: '*.ts',
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe(
"'testPattern' (filter: '*.ts')",
);
expect(invocation.getDescription()).toBe("'testPattern' in *.ts");
});
it('should generate correct description with pattern and path', async () => {
@@ -1165,37 +1206,47 @@ describe('RipGrepTool', () => {
path: path.join('src', 'app'),
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toContain(
"'testPattern' in path 'src",
);
expect(invocation.getDescription()).toContain("app'");
// The path will be relative to the tempRootDir, so we check for containment.
expect(invocation.getDescription()).toContain("'testPattern' within");
expect(invocation.getDescription()).toContain(path.join('src', 'app'));
});
it('should generate correct description with default search path', () => {
it('should indicate searching across all workspace directories when no path specified', () => {
// Create a mock config with multiple directories
const multiDirConfig = {
getTargetDir: () => tempRootDir,
getWorkspaceContext: () =>
createMockWorkspaceContext(tempRootDir, ['/another/dir']),
getDebugMode: () => false,
} as unknown as Config;
const multiDirGrepTool = new RipGrepTool(multiDirConfig);
const params: RipGrepToolParams = { pattern: 'testPattern' };
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe("'testPattern'");
const invocation = multiDirGrepTool.build(params);
expect(invocation.getDescription()).toBe(
"'testPattern' across all workspace directories",
);
});
it('should generate correct description with pattern, glob, and path', async () => {
it('should generate correct description with pattern, include, and path', async () => {
const dirPath = path.join(tempRootDir, 'src', 'app');
await fs.mkdir(dirPath, { recursive: true });
const params: RipGrepToolParams = {
pattern: 'testPattern',
glob: '*.ts',
include: '*.ts',
path: path.join('src', 'app'),
};
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toContain(
"'testPattern' in path 'src",
"'testPattern' in *.ts within",
);
expect(invocation.getDescription()).toContain("(filter: '*.ts')");
expect(invocation.getDescription()).toContain(path.join('src', 'app'));
});
it('should use path when specified in description', () => {
it('should use ./ for root path in description', () => {
const params: RipGrepToolParams = { pattern: 'testPattern', path: '.' };
const invocation = grepTool.build(params);
expect(invocation.getDescription()).toBe("'testPattern' in path '.'");
expect(invocation.getDescription()).toBe("'testPattern' within ./");
});
});
});

View File

@@ -10,19 +10,16 @@ import { EOL } from 'node:os';
import { spawn } from 'node:child_process';
import type { ToolInvocation, ToolResult } from './tools.js';
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
import { ToolNames } from './tool-names.js';
import { resolveAndValidatePath } from '../utils/paths.js';
import { getErrorMessage } from '../utils/errors.js';
import { SchemaValidator } from '../utils/schemaValidator.js';
import { makeRelative, shortenPath } from '../utils/paths.js';
import { getErrorMessage, isNodeError } from '../utils/errors.js';
import type { Config } from '../config/config.js';
import { ensureRipgrepPath } from '../utils/ripgrepUtils.js';
import { SchemaValidator } from '../utils/schemaValidator.js';
import type { FileFilteringOptions } from '../config/constants.js';
import { DEFAULT_FILE_FILTERING_OPTIONS } from '../config/constants.js';
const MAX_LLM_CONTENT_LENGTH = 20_000;
const DEFAULT_TOTAL_MAX_MATCHES = 20000;
/**
* Parameters for the GrepTool (Simplified)
* Parameters for the GrepTool
*/
export interface RipGrepToolParams {
/**
@@ -36,14 +33,18 @@ export interface RipGrepToolParams {
path?: string;
/**
* Glob pattern to filter files (e.g. "*.js", "*.{ts,tsx}")
* File pattern to include in the search (e.g. "*.js", "*.{ts,tsx}")
*/
glob?: string;
include?: string;
}
/**
* Maximum number of matching lines to return (optional, shows all if not specified)
*/
limit?: number;
/**
* Result object for a single grep match
*/
interface GrepMatch {
filePath: string;
lineNumber: number;
line: string;
}
class GrepToolInvocation extends BaseToolInvocation<
@@ -57,97 +58,147 @@ class GrepToolInvocation extends BaseToolInvocation<
super(params);
}
/**
* Checks if a path is within the root directory and resolves it.
* @param relativePath Path relative to the root directory (or undefined for root).
* @returns The absolute path if valid and exists, or null if no path specified (to search all directories).
* @throws {Error} If path is outside root, doesn't exist, or isn't a directory.
*/
private resolveAndValidatePath(relativePath?: string): string | null {
// If no path specified, return null to indicate searching all workspace directories
if (!relativePath) {
return null;
}
const targetPath = path.resolve(this.config.getTargetDir(), relativePath);
// Security Check: Ensure the resolved path is within workspace boundaries
const workspaceContext = this.config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(targetPath)) {
const directories = workspaceContext.getDirectories();
throw new Error(
`Path validation failed: Attempted path "${relativePath}" resolves outside the allowed workspace directories: ${directories.join(', ')}`,
);
}
// Check existence and type after resolving
try {
const stats = fs.statSync(targetPath);
if (!stats.isDirectory()) {
throw new Error(`Path is not a directory: ${targetPath}`);
}
} catch (error: unknown) {
if (isNodeError(error) && error.code === 'ENOENT') {
throw new Error(`Path does not exist: ${targetPath}`);
}
throw new Error(
`Failed to access path stats for ${targetPath}: ${error}`,
);
}
return targetPath;
}
async execute(signal: AbortSignal): Promise<ToolResult> {
try {
const searchDirAbs = resolveAndValidatePath(
this.config,
this.params.path,
{ allowFiles: true },
);
const workspaceContext = this.config.getWorkspaceContext();
const searchDirAbs = this.resolveAndValidatePath(this.params.path);
const searchDirDisplay = this.params.path || '.';
// Get raw ripgrep output
const rawOutput = await this.performRipgrepSearch({
pattern: this.params.pattern,
path: searchDirAbs,
glob: this.params.glob,
signal,
});
// Build search description
const searchLocationDescription = this.params.path
? `in path "${searchDirDisplay}"`
: `in the workspace directory`;
const filterDescription = this.params.glob
? ` (filter: "${this.params.glob}")`
: '';
// Check if we have any matches
if (!rawOutput.trim()) {
const noMatchMsg = `No matches found for pattern "${this.params.pattern}" ${searchLocationDescription}${filterDescription}.`;
return { llmContent: noMatchMsg, returnDisplay: `No matches found` };
// Determine which directories to search
let searchDirectories: readonly string[];
if (searchDirAbs === null) {
// No path specified - search all workspace directories
searchDirectories = workspaceContext.getDirectories();
} else {
// Specific path provided - search only that directory
searchDirectories = [searchDirAbs];
}
// Split into lines and count total matches
const allLines = rawOutput.split(EOL).filter((line) => line.trim());
const totalMatches = allLines.length;
const matchTerm = totalMatches === 1 ? 'match' : 'matches';
let allMatches: GrepMatch[] = [];
const totalMaxMatches = DEFAULT_TOTAL_MAX_MATCHES;
// Build header early to calculate available space
const header = `Found ${totalMatches} ${matchTerm} for pattern "${this.params.pattern}" ${searchLocationDescription}${filterDescription}:\n---\n`;
// Apply line limit first (if specified)
let truncatedByLineLimit = false;
let linesToInclude = allLines;
if (
this.params.limit !== undefined &&
allLines.length > this.params.limit
) {
linesToInclude = allLines.slice(0, this.params.limit);
truncatedByLineLimit = true;
if (this.config.getDebugMode()) {
console.log(`[GrepTool] Total result limit: ${totalMaxMatches}`);
}
// Build output and track how many lines we include, respecting character limit
const parts: string[] = [];
let includedLines = 0;
let truncatedByCharLimit = false;
let currentLength = 0;
for (const searchDir of searchDirectories) {
const searchResult = await this.performRipgrepSearch({
pattern: this.params.pattern,
path: searchDir,
include: this.params.include,
signal,
});
for (const line of linesToInclude) {
const sep = includedLines > 0 ? 1 : 0;
if (searchDirectories.length > 1) {
const dirName = path.basename(searchDir);
searchResult.forEach((match) => {
match.filePath = path.join(dirName, match.filePath);
});
}
includedLines++;
allMatches = allMatches.concat(searchResult);
if (currentLength + line.length <= MAX_LLM_CONTENT_LENGTH) {
parts.push(line);
currentLength = currentLength + line.length + sep;
} else {
const remaining = Math.max(
MAX_LLM_CONTENT_LENGTH - currentLength - sep,
10,
);
parts.push(line.slice(0, remaining) + '...');
truncatedByCharLimit = true;
if (allMatches.length >= totalMaxMatches) {
allMatches = allMatches.slice(0, totalMaxMatches);
break;
}
}
const grepOutput = parts.join('\n');
// Build result
let llmContent = header + grepOutput;
// Add truncation notice if needed
if (truncatedByLineLimit || truncatedByCharLimit) {
const omittedMatches = totalMatches - includedLines;
llmContent += `\n---\n[${omittedMatches} ${omittedMatches === 1 ? 'line' : 'lines'} truncated] ...`;
let searchLocationDescription: string;
if (searchDirAbs === null) {
const numDirs = workspaceContext.getDirectories().length;
searchLocationDescription =
numDirs > 1
? `across ${numDirs} workspace directories`
: `in the workspace directory`;
} else {
searchLocationDescription = `in path "${searchDirDisplay}"`;
}
// Build display message (show real count, not truncated)
let displayMessage = `Found ${totalMatches} ${matchTerm}`;
if (truncatedByLineLimit || truncatedByCharLimit) {
displayMessage += ` (truncated)`;
if (allMatches.length === 0) {
const noMatchMsg = `No matches found for pattern "${this.params.pattern}" ${searchLocationDescription}${this.params.include ? ` (filter: "${this.params.include}")` : ''}.`;
return { llmContent: noMatchMsg, returnDisplay: `No matches found` };
}
const wasTruncated = allMatches.length >= totalMaxMatches;
const matchesByFile = allMatches.reduce(
(acc, match) => {
const fileKey = match.filePath;
if (!acc[fileKey]) {
acc[fileKey] = [];
}
acc[fileKey].push(match);
acc[fileKey].sort((a, b) => a.lineNumber - b.lineNumber);
return acc;
},
{} as Record<string, GrepMatch[]>,
);
const matchCount = allMatches.length;
const matchTerm = matchCount === 1 ? 'match' : 'matches';
let llmContent = `Found ${matchCount} ${matchTerm} for pattern "${this.params.pattern}" ${searchLocationDescription}${this.params.include ? ` (filter: "${this.params.include}")` : ''}`;
if (wasTruncated) {
llmContent += ` (results limited to ${totalMaxMatches} matches for performance)`;
}
llmContent += `:\n---\n`;
for (const filePath in matchesByFile) {
llmContent += `File: ${filePath}\n`;
matchesByFile[filePath].forEach((match) => {
const trimmedLine = match.line.trim();
llmContent += `L${match.lineNumber}: ${trimmedLine}\n`;
});
llmContent += '---\n';
}
let displayMessage = `Found ${matchCount} ${matchTerm}`;
if (wasTruncated) {
displayMessage += ` (limited)`;
}
return {
@@ -164,15 +215,53 @@ class GrepToolInvocation extends BaseToolInvocation<
}
}
private parseRipgrepOutput(output: string, basePath: string): GrepMatch[] {
const results: GrepMatch[] = [];
if (!output) return results;
const lines = output.split(EOL);
for (const line of lines) {
if (!line.trim()) continue;
const firstColonIndex = line.indexOf(':');
if (firstColonIndex === -1) continue;
const secondColonIndex = line.indexOf(':', firstColonIndex + 1);
if (secondColonIndex === -1) continue;
const filePathRaw = line.substring(0, firstColonIndex);
const lineNumberStr = line.substring(
firstColonIndex + 1,
secondColonIndex,
);
const lineContent = line.substring(secondColonIndex + 1);
const lineNumber = parseInt(lineNumberStr, 10);
if (!isNaN(lineNumber)) {
const absoluteFilePath = path.resolve(basePath, filePathRaw);
const relativeFilePath = path.relative(basePath, absoluteFilePath);
results.push({
filePath: relativeFilePath || path.basename(absoluteFilePath),
lineNumber,
line: lineContent,
});
}
}
return results;
}
private async performRipgrepSearch(options: {
pattern: string;
path: string; // Can be a file or directory
glob?: string;
path: string;
include?: string;
signal: AbortSignal;
}): Promise<string> {
const { pattern, path: absolutePath, glob } = options;
}): Promise<GrepMatch[]> {
const { pattern, path: absolutePath, include } = options;
const rgArgs: string[] = [
const rgArgs = [
'--line-number',
'--no-heading',
'--with-filename',
@@ -181,34 +270,29 @@ class GrepToolInvocation extends BaseToolInvocation<
pattern,
];
// Add file exclusions from .gitignore and .qwenignore
const filteringOptions = this.getFileFilteringOptions();
if (!filteringOptions.respectGitIgnore) {
rgArgs.push('--no-ignore-vcs');
if (include) {
rgArgs.push('--glob', include);
}
if (filteringOptions.respectQwenIgnore) {
const qwenIgnorePath = path.join(
this.config.getTargetDir(),
'.qwenignore',
);
if (fs.existsSync(qwenIgnorePath)) {
rgArgs.push('--ignore-file', qwenIgnorePath);
}
}
// Add glob pattern if provided
if (glob) {
rgArgs.push('--glob', glob);
}
const excludes = [
'.git',
'node_modules',
'bower_components',
'*.log',
'*.tmp',
'build',
'dist',
'coverage',
];
excludes.forEach((exclude) => {
rgArgs.push('--glob', `!${exclude}`);
});
rgArgs.push('--threads', '4');
rgArgs.push(absolutePath);
try {
const rgPath = this.config.getUseBuiltinRipgrep()
? await ensureRipgrepPath()
: 'rg';
const rgPath = await ensureRipgrepPath();
const output = await new Promise<string>((resolve, reject) => {
const child = spawn(rgPath, rgArgs, {
windowsHide: true,
@@ -250,78 +334,83 @@ class GrepToolInvocation extends BaseToolInvocation<
});
});
return output;
return this.parseRipgrepOutput(output, absolutePath);
} catch (error: unknown) {
console.error(`GrepLogic: ripgrep failed: ${getErrorMessage(error)}`);
throw error;
}
}
private getFileFilteringOptions(): FileFilteringOptions {
const options = this.config.getFileFilteringOptions?.();
return {
respectGitIgnore:
options?.respectGitIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectGitIgnore,
respectQwenIgnore:
options?.respectQwenIgnore ??
DEFAULT_FILE_FILTERING_OPTIONS.respectQwenIgnore,
};
}
/**
* Gets a description of the grep operation
* @param params Parameters for the grep operation
* @returns A string describing the grep
*/
getDescription(): string {
let description = `'${this.params.pattern}'`;
if (this.params.include) {
description += ` in ${this.params.include}`;
}
if (this.params.path) {
description += ` in path '${this.params.path}'`;
const resolvedPath = path.resolve(
this.config.getTargetDir(),
this.params.path,
);
if (
resolvedPath === this.config.getTargetDir() ||
this.params.path === '.'
) {
description += ` within ./`;
} else {
const relativePath = makeRelative(
resolvedPath,
this.config.getTargetDir(),
);
description += ` within ${shortenPath(relativePath)}`;
}
} else {
// When no path is specified, indicate searching all workspace directories
const workspaceContext = this.config.getWorkspaceContext();
const directories = workspaceContext.getDirectories();
if (directories.length > 1) {
description += ` across all workspace directories`;
}
}
if (this.params.glob) {
description += ` (filter: '${this.params.glob}')`;
}
return description;
}
}
/**
* Implementation of the Grep tool logic
* Implementation of the Grep tool logic (moved from CLI)
*/
export class RipGrepTool extends BaseDeclarativeTool<
RipGrepToolParams,
ToolResult
> {
static readonly Name = ToolNames.GREP;
static readonly Name = 'search_file_content';
constructor(private readonly config: Config) {
super(
RipGrepTool.Name,
'Grep',
'A powerful search tool built on ripgrep\n\n Usage:\n - ALWAYS use Grep for search tasks. NEVER invoke `grep` or `rg` as a Bash command. The Grep tool has been optimized for correct permissions and access.\n - Supports full regex syntax (e.g., "log.*Error", "function\\s+\\w+")\n - Filter files with glob parameter (e.g., "*.js", "**/*.tsx")\n - Use Task tool for open-ended searches requiring multiple rounds\n - Pattern syntax: Uses ripgrep (not grep) - special regex characters need escaping (use `interface\\{\\}` to find `interface{}` in Go code)\n',
'SearchText',
'Searches for a regular expression pattern within the content of files in a specified directory (or current working directory). Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers. Total results limited to 20,000 matches like VSCode.',
Kind.Search,
{
properties: {
pattern: {
type: 'string',
description:
'The regular expression pattern to search for in file contents',
},
glob: {
"The regular expression (regex) pattern to search for within file contents (e.g., 'function\\s+myFunction', 'import\\s+\\{.*\\}\\s+from\\s+.*').",
type: 'string',
description:
'Glob pattern to filter files (e.g. "*.js", "*.{ts,tsx}") - maps to rg --glob',
},
path: {
description:
'Optional: The absolute path to the directory to search within. If omitted, searches the current working directory.',
type: 'string',
description:
'File or directory to search in (rg PATH). Defaults to current working directory.',
},
limit: {
type: 'number',
include: {
description:
'Limit output to first N lines/entries. Optional - shows all matches if not specified.',
"Optional: A glob pattern to filter which files are searched (e.g., '*.js', '*.{ts,tsx}', 'src/**'). If omitted, searches all files (respecting potential global ignores).",
type: 'string',
},
},
required: ['pattern'],
@@ -330,14 +419,53 @@ export class RipGrepTool extends BaseDeclarativeTool<
);
}
/**
* Checks if a path is within the root directory and resolves it.
* @param relativePath Path relative to the root directory (or undefined for root).
* @returns The absolute path if valid and exists, or null if no path specified (to search all directories).
* @throws {Error} If path is outside root, doesn't exist, or isn't a directory.
*/
private resolveAndValidatePath(relativePath?: string): string | null {
// If no path specified, return null to indicate searching all workspace directories
if (!relativePath) {
return null;
}
const targetPath = path.resolve(this.config.getTargetDir(), relativePath);
// Security Check: Ensure the resolved path is within workspace boundaries
const workspaceContext = this.config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(targetPath)) {
const directories = workspaceContext.getDirectories();
throw new Error(
`Path validation failed: Attempted path "${relativePath}" resolves outside the allowed workspace directories: ${directories.join(', ')}`,
);
}
// Check existence and type after resolving
try {
const stats = fs.statSync(targetPath);
if (!stats.isDirectory()) {
throw new Error(`Path is not a directory: ${targetPath}`);
}
} catch (error: unknown) {
if (isNodeError(error) && error.code === 'ENOENT') {
throw new Error(`Path does not exist: ${targetPath}`);
}
throw new Error(
`Failed to access path stats for ${targetPath}: ${error}`,
);
}
return targetPath;
}
/**
* Validates the parameters for the tool
* @param params Parameters to validate
* @returns An error message string if invalid, null otherwise
*/
protected override validateToolParamValues(
params: RipGrepToolParams,
): string | null {
override validateToolParams(params: RipGrepToolParams): string | null {
const errors = SchemaValidator.validate(
this.schema.parametersJsonSchema,
params,
@@ -346,17 +474,10 @@ export class RipGrepTool extends BaseDeclarativeTool<
return errors;
}
// Validate pattern is a valid regex
try {
new RegExp(params.pattern);
} catch (error) {
return `Invalid regular expression pattern: ${params.pattern}. Error: ${getErrorMessage(error)}`;
}
// Only validate path if one is provided
if (params.path) {
try {
resolveAndValidatePath(this.config, params.path, { allowFiles: true });
this.resolveAndValidatePath(params.path);
} catch (error) {
return getErrorMessage(error);
}
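For concreteness, a minimal sketch of driving the reworked tool end to end; the import paths and the `config` value are placeholders (file names are not visible in this diff), and the expected output shape follows the formatting code above.

```ts
// Sketch, not a definitive example: import paths are placeholders.
import type { Config } from './config.js';
import { RipGrepTool, type RipGrepToolParams } from './ripGrep.js';

async function demo(config: Config): Promise<void> {
  const tool = new RipGrepTool(config);
  const params: RipGrepToolParams = { pattern: 'hello', path: 'src', include: '*.ts' };
  // build() throws if 'src' is outside the workspace or is not an existing directory.
  const invocation = tool.build(params);
  const result = await invocation.execute(new AbortController().signal);
  // result.llmContent is grouped per file, e.g.:
  //   Found 1 match for pattern "hello" in path "src" (filter: "*.ts"):
  //   ---
  //   File: main.ts
  //   L1: const greeting = "hello";
  //   ---
  console.log(result.returnDisplay); // "Found 1 match"
}
```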

View File

@@ -14,7 +14,7 @@ export const ToolNames = {
WRITE_FILE: 'write_file',
READ_FILE: 'read_file',
READ_MANY_FILES: 'read_many_files',
GREP: 'grep_search',
GREP: 'search_file_content',
GLOB: 'glob',
SHELL: 'run_shell_command',
TODO_WRITE: 'todo_write',

View File

@@ -0,0 +1,166 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { WebSearchTool, type WebSearchToolParams } from './web-search.js';
import type { Config } from '../config/config.js';
import { GeminiClient } from '../core/client.js';
// Mock GeminiClient and Config constructor
vi.mock('../core/client.js');
vi.mock('../config/config.js');
// Mock global fetch
const mockFetch = vi.fn();
global.fetch = mockFetch;
describe('WebSearchTool', () => {
const abortSignal = new AbortController().signal;
let mockGeminiClient: GeminiClient;
let tool: WebSearchTool;
beforeEach(() => {
vi.clearAllMocks();
const mockConfigInstance = {
getGeminiClient: () => mockGeminiClient,
getProxy: () => undefined,
getTavilyApiKey: () => 'test-api-key', // Add the missing method
} as unknown as Config;
mockGeminiClient = new GeminiClient(mockConfigInstance);
tool = new WebSearchTool(mockConfigInstance);
});
afterEach(() => {
vi.restoreAllMocks();
});
describe('build', () => {
it('should return an invocation for a valid query', () => {
const params: WebSearchToolParams = { query: 'test query' };
const invocation = tool.build(params);
expect(invocation).toBeDefined();
expect(invocation.params).toEqual(params);
});
it('should throw an error for an empty query', () => {
const params: WebSearchToolParams = { query: '' };
expect(() => tool.build(params)).toThrow(
"The 'query' parameter cannot be empty.",
);
});
it('should throw an error for a query with only whitespace', () => {
const params: WebSearchToolParams = { query: ' ' };
expect(() => tool.build(params)).toThrow(
"The 'query' parameter cannot be empty.",
);
});
});
describe('getDescription', () => {
it('should return a description of the search', () => {
const params: WebSearchToolParams = { query: 'test query' };
const invocation = tool.build(params);
expect(invocation.getDescription()).toBe(
'Searching the web for: "test query"',
);
});
});
describe('execute', () => {
it('should return search results for a successful query', async () => {
const params: WebSearchToolParams = { query: 'successful query' };
// Mock the fetch response
mockFetch.mockResolvedValueOnce({
ok: true,
json: async () => ({
answer: 'Here are your results.',
results: [],
}),
} as Response);
const invocation = tool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toBe(
'Web search results for "successful query":\n\nHere are your results.',
);
expect(result.returnDisplay).toBe(
'Search results for "successful query" returned.',
);
expect(result.sources).toEqual([]);
});
it('should handle no search results found', async () => {
const params: WebSearchToolParams = { query: 'no results query' };
// Mock the fetch response
mockFetch.mockResolvedValueOnce({
ok: true,
json: async () => ({
answer: '',
results: [],
}),
} as Response);
const invocation = tool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toBe(
'No search results or information found for query: "no results query"',
);
expect(result.returnDisplay).toBe('No information found.');
});
it('should handle API errors gracefully', async () => {
const params: WebSearchToolParams = { query: 'error query' };
// Mock the fetch to reject
mockFetch.mockRejectedValueOnce(new Error('API Failure'));
const invocation = tool.build(params);
const result = await invocation.execute(abortSignal);
expect(result.llmContent).toContain('Error:');
expect(result.llmContent).toContain('API Failure');
expect(result.returnDisplay).toBe('Error performing web search.');
});
it('should correctly format results with sources', async () => {
const params: WebSearchToolParams = { query: 'grounding query' };
// Mock the fetch response
mockFetch.mockResolvedValueOnce({
ok: true,
json: async () => ({
answer: 'This is a test response.',
results: [
{ title: 'Example Site', url: 'https://example.com' },
{ title: 'Google', url: 'https://google.com' },
],
}),
} as Response);
const invocation = tool.build(params);
const result = await invocation.execute(abortSignal);
const expectedLlmContent = `Web search results for "grounding query":

This is a test response.

Sources:
[1] Example Site (https://example.com)
[2] Google (https://google.com)`;
expect(result.llmContent).toBe(expectedLlmContent);
expect(result.returnDisplay).toBe(
'Search results for "grounding query" returned.',
);
expect(result.sources).toHaveLength(2);
});
});
});

View File

@@ -0,0 +1,218 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import {
BaseDeclarativeTool,
BaseToolInvocation,
Kind,
type ToolInvocation,
type ToolResult,
type ToolCallConfirmationDetails,
type ToolInfoConfirmationDetails,
ToolConfirmationOutcome,
} from './tools.js';
import type { Config } from '../config/config.js';
import { ApprovalMode } from '../config/config.js';
import { getErrorMessage } from '../utils/errors.js';
interface TavilyResultItem {
title: string;
url: string;
content?: string;
score?: number;
published_date?: string;
}
interface TavilySearchResponse {
query: string;
answer?: string;
results: TavilyResultItem[];
}
/**
* Parameters for the WebSearchTool.
*/
export interface WebSearchToolParams {
/**
* The search query.
*/
query: string;
}
/**
* Extends ToolResult to include sources for web search.
*/
export interface WebSearchToolResult extends ToolResult {
sources?: Array<{ title: string; url: string }>;
}
class WebSearchToolInvocation extends BaseToolInvocation<
WebSearchToolParams,
WebSearchToolResult
> {
constructor(
private readonly config: Config,
params: WebSearchToolParams,
) {
super(params);
}
override getDescription(): string {
return `Searching the web for: "${this.params.query}"`;
}
override async shouldConfirmExecute(
_abortSignal: AbortSignal,
): Promise<ToolCallConfirmationDetails | false> {
if (this.config.getApprovalMode() === ApprovalMode.AUTO_EDIT) {
return false;
}
const confirmationDetails: ToolInfoConfirmationDetails = {
type: 'info',
title: 'Confirm Web Search',
prompt: `Search the web for: "${this.params.query}"`,
onConfirm: async (outcome: ToolConfirmationOutcome) => {
if (outcome === ToolConfirmationOutcome.ProceedAlways) {
this.config.setApprovalMode(ApprovalMode.AUTO_EDIT);
}
},
};
return confirmationDetails;
}
async execute(signal: AbortSignal): Promise<WebSearchToolResult> {
const apiKey = this.config.getTavilyApiKey();
if (!apiKey) {
return {
llmContent:
'Web search is disabled because TAVILY_API_KEY is not configured. Please set it in your settings.json, .env file, or via --tavily-api-key command line argument to enable web search.',
returnDisplay:
'Web search disabled. Configure TAVILY_API_KEY to enable Tavily search.',
};
}
try {
const response = await fetch('https://api.tavily.com/search', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
api_key: apiKey,
query: this.params.query,
search_depth: 'advanced',
max_results: 5,
include_answer: true,
}),
signal,
});
if (!response.ok) {
const text = await response.text().catch(() => '');
throw new Error(
`Tavily API error: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`,
);
}
const data = (await response.json()) as TavilySearchResponse;
const sources = (data.results || []).map((r) => ({
title: r.title,
url: r.url,
}));
const sourceListFormatted = sources.map(
(s, i) => `[${i + 1}] ${s.title || 'Untitled'} (${s.url})`,
);
let content = data.answer?.trim() || '';
if (!content) {
// Fallback: build a concise summary from top results
content = sources
.slice(0, 3)
.map((s, i) => `${i + 1}. ${s.title} - ${s.url}`)
.join('\n');
}
if (sourceListFormatted.length > 0) {
content += `\n\nSources:\n${sourceListFormatted.join('\n')}`;
}
if (!content.trim()) {
return {
llmContent: `No search results or information found for query: "${this.params.query}"`,
returnDisplay: 'No information found.',
};
}
return {
llmContent: `Web search results for "${this.params.query}":\n\n${content}`,
returnDisplay: `Search results for "${this.params.query}" returned.`,
sources,
};
} catch (error: unknown) {
const errorMessage = `Error during web search for query "${this.params.query}": ${getErrorMessage(
error,
)}`;
console.error(errorMessage, error);
return {
llmContent: `Error: ${errorMessage}`,
returnDisplay: `Error performing web search.`,
};
}
}
}
/**
* A tool to perform web searches using Tavily Search via the Tavily API.
*/
export class WebSearchTool extends BaseDeclarativeTool<
WebSearchToolParams,
WebSearchToolResult
> {
static readonly Name: string = 'web_search';
constructor(private readonly config: Config) {
super(
WebSearchTool.Name,
'WebSearch',
'Performs a web search using the Tavily API and returns a concise answer with sources. Requires the TAVILY_API_KEY environment variable.',
Kind.Search,
{
type: 'object',
properties: {
query: {
type: 'string',
description: 'The search query to find information on the web.',
},
},
required: ['query'],
},
);
}
/**
* Validates the parameters for the WebSearchTool.
* @param params The parameters to validate
* @returns An error message string if validation fails, null if valid
*/
protected override validateToolParamValues(
params: WebSearchToolParams,
): string | null {
if (!params.query || params.query.trim() === '') {
return "The 'query' parameter cannot be empty.";
}
return null;
}
protected createInvocation(
params: WebSearchToolParams,
): ToolInvocation<WebSearchToolParams, WebSearchToolResult> {
return new WebSearchToolInvocation(this.config, params);
}
}
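For reference, the request that `execute()` issues can be reproduced on its own. The endpoint and body fields are copied from the code above; the key and query values are placeholders.

```ts
// Standalone sketch of the Tavily call made by execute(); key and query are placeholders.
const response = await fetch('https://api.tavily.com/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'tvly-your-api-key-here',
    query: 'qwen code release notes',
    search_depth: 'advanced',
    max_results: 5,
    include_answer: true,
  }),
});
const data = (await response.json()) as {
  answer?: string;
  results: Array<{ title: string; url: string }>;
};
// data.answer becomes the summary text; data.results become the numbered "Sources:" list.
```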

View File

@@ -1,58 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import type { WebSearchProvider, WebSearchResult } from './types.js';
/**
* Base implementation for web search providers.
* Provides common functionality for error handling.
*/
export abstract class BaseWebSearchProvider implements WebSearchProvider {
abstract readonly name: string;
/**
* Check if the provider is available (has required configuration).
*/
abstract isAvailable(): boolean;
/**
* Perform the actual search implementation.
* @param query The search query
* @param signal Abort signal for cancellation
* @returns Promise resolving to search results
*/
protected abstract performSearch(
query: string,
signal: AbortSignal,
): Promise<WebSearchResult>;
/**
* Execute a web search with error handling.
* @param query The search query
* @param signal Abort signal for cancellation
* @returns Promise resolving to search results
*/
async search(query: string, signal: AbortSignal): Promise<WebSearchResult> {
if (!this.isAvailable()) {
throw new Error(
`[${this.name}] Provider is not available. Please check your configuration.`,
);
}
try {
return await this.performSearch(query, signal);
} catch (error: unknown) {
if (
error instanceof Error &&
error.message.startsWith(`[${this.name}]`)
) {
throw error;
}
const message = error instanceof Error ? error.message : String(error);
throw new Error(`[${this.name}] Search failed: ${message}`);
}
}
}
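Since the concrete providers (TavilyProvider, GoogleProvider, DashScopeProvider) are not shown in this diff, here is a hypothetical subclass illustrating the contract the removed base class enforced; the name, endpoint, and the `WebSearchResult` shape (inferred from the removed `index.ts` below) are all assumptions.

```ts
// Hypothetical provider, for illustration only; not part of the codebase.
class ExampleProvider extends BaseWebSearchProvider {
  readonly name = 'example';

  constructor(private readonly apiKey?: string) {
    super();
  }

  isAvailable(): boolean {
    return Boolean(this.apiKey);
  }

  protected async performSearch(
    query: string,
    signal: AbortSignal,
  ): Promise<WebSearchResult> {
    const res = await fetch(
      `https://search.example.invalid/v1?q=${encodeURIComponent(query)}`,
      { headers: { Authorization: `Bearer ${this.apiKey}` }, signal },
    );
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const data = await res.json();
    // Any error thrown here is rewrapped by search() as "[example] Search failed: ...".
    return { answer: data.answer, results: data.results ?? [] };
  }
}
```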

View File

@@ -1,312 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { WebSearchTool } from './index.js';
import type { Config } from '../../config/config.js';
import type { WebSearchConfig } from './types.js';
import { ApprovalMode } from '../../config/config.js';
describe('WebSearchTool', () => {
let mockConfig: Config;
beforeEach(() => {
vi.resetAllMocks();
mockConfig = {
getApprovalMode: vi.fn(() => ApprovalMode.AUTO_EDIT),
setApprovalMode: vi.fn(),
getWebSearchConfig: vi.fn(),
} as unknown as Config;
});
describe('formatSearchResults', () => {
it('should use answer when available and append sources', async () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'tavily',
apiKey: 'test-key',
},
],
default: 'tavily',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
// Mock fetch to return search results with answer
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: async () => ({
query: 'test query',
answer: 'This is a concise answer from the search provider.',
results: [
{
title: 'Result 1',
url: 'https://example.com/1',
content: 'Content 1',
},
{
title: 'Result 2',
url: 'https://example.com/2',
content: 'Content 2',
},
],
}),
});
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
expect(result.llmContent).toContain(
'This is a concise answer from the search provider.',
);
expect(result.llmContent).toContain('Sources:');
expect(result.llmContent).toContain(
'[1] Result 1 (https://example.com/1)',
);
expect(result.llmContent).toContain(
'[2] Result 2 (https://example.com/2)',
);
});
it('should build informative summary when answer is not available', async () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'google',
apiKey: 'test-key',
searchEngineId: 'test-engine',
},
],
default: 'google',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
// Mock fetch to return search results without answer
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: async () => ({
items: [
{
title: 'Google Result 1',
link: 'https://example.com/1',
snippet: 'This is a helpful snippet from the first result.',
},
{
title: 'Google Result 2',
link: 'https://example.com/2',
snippet: 'This is a helpful snippet from the second result.',
},
],
}),
});
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
// Should contain formatted results with title, snippet, and source
expect(result.llmContent).toContain('1. **Google Result 1**');
expect(result.llmContent).toContain(
'This is a helpful snippet from the first result.',
);
expect(result.llmContent).toContain('Source: https://example.com/1');
expect(result.llmContent).toContain('2. **Google Result 2**');
expect(result.llmContent).toContain(
'This is a helpful snippet from the second result.',
);
expect(result.llmContent).toContain('Source: https://example.com/2');
// Should include web_fetch hint
expect(result.llmContent).toContain('web_fetch tool');
});
it('should include optional fields when available', async () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'tavily',
apiKey: 'test-key',
},
],
default: 'tavily',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
// Mock fetch to return results with score and publishedDate
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: async () => ({
query: 'test query',
results: [
{
title: 'Result with metadata',
url: 'https://example.com',
content: 'Content with metadata',
score: 0.95,
published_date: '2024-01-15',
},
],
}),
});
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
// Should include relevance score
expect(result.llmContent).toContain('Relevance: 95%');
// Should include published date
expect(result.llmContent).toContain('Published: 2024-01-15');
});
it('should handle empty results gracefully', async () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'google',
apiKey: 'test-key',
searchEngineId: 'test-engine',
},
],
default: 'google',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
// Mock fetch to return empty results
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: async () => ({
items: [],
}),
});
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
expect(result.llmContent).toContain('No search results found');
});
it('should limit to top 5 results in fallback mode', async () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'google',
apiKey: 'test-key',
searchEngineId: 'test-engine',
},
],
default: 'google',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
// Mock fetch to return 10 results
const items = Array.from({ length: 10 }, (_, i) => ({
title: `Result ${i + 1}`,
link: `https://example.com/${i + 1}`,
snippet: `Snippet ${i + 1}`,
}));
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: async () => ({ items }),
});
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
// Should only contain first 5 results
expect(result.llmContent).toContain('1. **Result 1**');
expect(result.llmContent).toContain('5. **Result 5**');
expect(result.llmContent).not.toContain('6. **Result 6**');
expect(result.llmContent).not.toContain('10. **Result 10**');
});
});
describe('validation', () => {
it('should throw validation error when query is empty', () => {
const tool = new WebSearchTool(mockConfig);
expect(() => tool.build({ query: '' })).toThrow(
"The 'query' parameter cannot be empty",
);
});
it('should throw validation error when provider is empty string', () => {
const tool = new WebSearchTool(mockConfig);
expect(() => tool.build({ query: 'test', provider: '' })).toThrow(
"The 'provider' parameter cannot be empty",
);
});
});
describe('configuration', () => {
it('should return error when web search is not configured', async () => {
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(null);
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const result = await invocation.execute(new AbortController().signal);
expect(result.error?.message).toContain('Web search is disabled');
expect(result.llmContent).toContain('Web search is disabled');
});
it('should return descriptive message in getDescription when web search is not configured', () => {
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(null);
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const description = invocation.getDescription();
expect(description).toBe(
' (Web search is disabled - configure a provider in settings.json)',
);
});
it('should return provider name in getDescription when web search is configured', () => {
const webSearchConfig: WebSearchConfig = {
provider: [
{
type: 'tavily',
apiKey: 'test-key',
},
],
default: 'tavily',
};
(
mockConfig.getWebSearchConfig as ReturnType<typeof vi.fn>
).mockReturnValue(webSearchConfig);
const tool = new WebSearchTool(mockConfig);
const invocation = tool.build({ query: 'test query' });
const description = invocation.getDescription();
expect(description).toBe(' (Searching the web via tavily)');
});
});
});

View File

@@ -1,336 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import {
BaseDeclarativeTool,
BaseToolInvocation,
Kind,
type ToolInvocation,
type ToolCallConfirmationDetails,
type ToolInfoConfirmationDetails,
ToolConfirmationOutcome,
} from '../tools.js';
import { ToolErrorType } from '../tool-error.js';
import type { Config } from '../../config/config.js';
import { ApprovalMode } from '../../config/config.js';
import { getErrorMessage } from '../../utils/errors.js';
import { buildContentWithSources } from './utils.js';
import { TavilyProvider } from './providers/tavily-provider.js';
import { GoogleProvider } from './providers/google-provider.js';
import { DashScopeProvider } from './providers/dashscope-provider.js';
import type {
WebSearchToolParams,
WebSearchToolResult,
WebSearchProvider,
WebSearchResultItem,
WebSearchProviderConfig,
DashScopeProviderConfig,
} from './types.js';
class WebSearchToolInvocation extends BaseToolInvocation<
WebSearchToolParams,
WebSearchToolResult
> {
constructor(
private readonly config: Config,
params: WebSearchToolParams,
) {
super(params);
}
override getDescription(): string {
const webSearchConfig = this.config.getWebSearchConfig();
if (!webSearchConfig) {
return ' (Web search is disabled - configure a provider in settings.json)';
}
const provider = this.params.provider || webSearchConfig.default;
return ` (Searching the web via ${provider})`;
}
override async shouldConfirmExecute(
_abortSignal: AbortSignal,
): Promise<ToolCallConfirmationDetails | false> {
if (this.config.getApprovalMode() === ApprovalMode.AUTO_EDIT) {
return false;
}
const confirmationDetails: ToolInfoConfirmationDetails = {
type: 'info',
title: 'Confirm Web Search',
prompt: `Search the web for: "${this.params.query}"`,
onConfirm: async (outcome: ToolConfirmationOutcome) => {
if (outcome === ToolConfirmationOutcome.ProceedAlways) {
this.config.setApprovalMode(ApprovalMode.AUTO_EDIT);
}
},
};
return confirmationDetails;
}
/**
* Create a provider instance from configuration.
*/
private createProvider(config: WebSearchProviderConfig): WebSearchProvider {
switch (config.type) {
case 'tavily':
return new TavilyProvider(config);
case 'google':
return new GoogleProvider(config);
case 'dashscope': {
// Pass auth type to DashScope provider for availability check
const authType = this.config.getAuthType();
const dashscopeConfig: DashScopeProviderConfig = {
...config,
authType: authType as string | undefined,
};
return new DashScopeProvider(dashscopeConfig);
}
default:
throw new Error('Unknown provider type');
}
}
/**
* Create all configured providers.
*/
private createProviders(
configs: WebSearchProviderConfig[],
): Map<string, WebSearchProvider> {
const providers = new Map<string, WebSearchProvider>();
for (const config of configs) {
try {
const provider = this.createProvider(config);
if (provider.isAvailable()) {
providers.set(config.type, provider);
}
} catch (error) {
console.warn(`Failed to create ${config.type} provider:`, error);
}
}
return providers;
}
/**
* Select the appropriate provider based on configuration and parameters.
* Throws error if provider not found.
*/
private selectProvider(
providers: Map<string, WebSearchProvider>,
requestedProvider?: string,
defaultProvider?: string,
): WebSearchProvider {
// Use requested provider if specified
if (requestedProvider) {
const provider = providers.get(requestedProvider);
if (!provider) {
const available = Array.from(providers.keys()).join(', ');
throw new Error(
`The specified provider "${requestedProvider}" is not available. Available: ${available}`,
);
}
return provider;
}
// Use default provider if specified and available
if (defaultProvider && providers.has(defaultProvider)) {
return providers.get(defaultProvider)!;
}
// Fallback to first available provider
const firstProvider = providers.values().next().value;
if (!firstProvider) {
throw new Error('No web search providers are available.');
}
return firstProvider;
}
/**
* Format search results into a content string.
*/
private formatSearchResults(searchResult: {
answer?: string;
results: WebSearchResultItem[];
}): {
content: string;
sources: Array<{ title: string; url: string }>;
} {
const sources = searchResult.results.map((r) => ({
title: r.title,
url: r.url,
}));
let content = searchResult.answer?.trim() || '';
if (!content) {
// Fallback: Build an informative summary with title + snippet + source link
// This provides enough context for the LLM while keeping token usage efficient
content = searchResult.results
.slice(0, 5) // Top 5 results
.map((r, i) => {
const parts = [`${i + 1}. **${r.title}**`];
// Include snippet/content if available
if (r.content?.trim()) {
parts.push(` ${r.content.trim()}`);
}
// Always include the source URL
parts.push(` Source: ${r.url}`);
// Optionally include relevance score if available
if (r.score !== undefined) {
parts.push(` Relevance: ${(r.score * 100).toFixed(0)}%`);
}
// Optionally include publish date if available
if (r.publishedDate) {
parts.push(` Published: ${r.publishedDate}`);
}
return parts.join('\n');
})
.join('\n\n');
// Add a note about using web_fetch for detailed content
if (content) {
content +=
'\n\n*Note: For detailed content from any source above, use the web_fetch tool with the URL.*';
}
} else {
// When answer is available, append sources section
content = buildContentWithSources(content, sources);
}
return { content, sources };
}
async execute(signal: AbortSignal): Promise<WebSearchToolResult> {
// Check if web search is configured
const webSearchConfig = this.config.getWebSearchConfig();
if (!webSearchConfig) {
return {
llmContent:
'Web search is disabled. Please configure a web search provider in your settings.',
returnDisplay: 'Web search is disabled.',
error: {
message: 'Web search is disabled',
type: ToolErrorType.EXECUTION_FAILED,
},
};
}
try {
// Create and select provider
const providers = this.createProviders(webSearchConfig.provider);
const provider = this.selectProvider(
providers,
this.params.provider,
webSearchConfig.default,
);
// Perform search
const searchResult = await provider.search(this.params.query, signal);
const { content, sources } = this.formatSearchResults(searchResult);
// Guard: Check if we got results
if (!content.trim()) {
return {
llmContent: `No search results found for query: "${this.params.query}" (via ${provider.name})`,
returnDisplay: `No information found for "${this.params.query}".`,
};
}
// Success result
return {
llmContent: `Web search results for "${this.params.query}" (via ${provider.name}):\n\n${content}`,
returnDisplay: `Search results for "${this.params.query}".`,
sources,
};
} catch (error: unknown) {
const errorMessage = `Error during web search: ${getErrorMessage(error)}`;
console.error(errorMessage, error);
return {
llmContent: errorMessage,
returnDisplay: 'Error performing web search.',
error: {
message: errorMessage,
type: ToolErrorType.EXECUTION_FAILED,
},
};
}
}
}
/**
* A tool to perform web searches using configurable providers.
*/
export class WebSearchTool extends BaseDeclarativeTool<
WebSearchToolParams,
WebSearchToolResult
> {
static readonly Name: string = 'web_search';
constructor(private readonly config: Config) {
super(
WebSearchTool.Name,
'WebSearch',
'Allows searching the web and using results to inform responses. Provides up-to-date information for current events and recent data beyond the training data cutoff. Returns search results formatted with concise answers and source links. Use this tool when the required information may have changed recently or falls beyond the model knowledge cutoff.',
Kind.Search,
{
type: 'object',
properties: {
query: {
type: 'string',
description: 'The search query to find information on the web.',
},
provider: {
type: 'string',
description:
'Optional provider to use for the search (e.g., "tavily", "google", "dashscope"). IMPORTANT: Only specify this parameter if you explicitly know which provider to use. Otherwise, omit this parameter entirely and let the system automatically select the appropriate provider based on availability and configuration. The system will choose the best available provider automatically.',
},
},
required: ['query'],
},
);
}
/**
* Validates the parameters for the WebSearchTool.
* @param params The parameters to validate
* @returns An error message string if validation fails, null if valid
*/
protected override validateToolParamValues(
params: WebSearchToolParams,
): string | null {
if (!params.query || params.query.trim() === '') {
return "The 'query' parameter cannot be empty.";
}
// Validate provider parameter if provided
if (params.provider !== undefined && params.provider.trim() === '') {
return "The 'provider' parameter cannot be empty if specified.";
}
return null;
}
protected createInvocation(
params: WebSearchToolParams,
): ToolInvocation<WebSearchToolParams, WebSearchToolResult> {
return new WebSearchToolInvocation(this.config, params);
}
}
// Re-export types for external use
export type {
WebSearchToolParams,
WebSearchToolResult,
WebSearchConfig,
WebSearchProviderConfig,
} from './types.js';
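
Provider selection above follows a fixed precedence: an explicitly requested provider, then the configured default, then the first available provider. A minimal sketch of that precedence with stand-in provider objects (the names and setup below are illustrative, not taken from the repository):

```ts
// Illustrative only: mirrors the order of precedence in selectProvider().
interface StubProvider {
  name: string;
}

function pick(
  providers: Map<string, StubProvider>,
  requested?: string,
  fallback?: string,
): StubProvider {
  if (requested) {
    const p = providers.get(requested);
    if (!p) throw new Error(`Provider "${requested}" is not available.`);
    return p; // 1. explicit request wins
  }
  if (fallback && providers.has(fallback)) {
    return providers.get(fallback)!; // 2. configured default
  }
  const first = providers.values().next().value;
  if (!first) throw new Error('No web search providers are available.');
  return first; // 3. first provider that was successfully created
}

const providers = new Map<string, StubProvider>([
  ['tavily', { name: 'Tavily' }],
  ['dashscope', { name: 'DashScope' }],
]);
pick(providers, 'dashscope').name; // 'DashScope' (explicit request)
pick(providers, undefined, 'tavily').name; // 'Tavily' (configured default)
```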

View File

@@ -1,199 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { promises as fs } from 'node:fs';
import * as os from 'os';
import * as path from 'path';
import { BaseWebSearchProvider } from '../base-provider.js';
import type {
WebSearchResult,
WebSearchResultItem,
DashScopeProviderConfig,
} from '../types.js';
import type { QwenCredentials } from '../../../qwen/qwenOAuth2.js';
interface DashScopeSearchItem {
_id: string;
snippet: string;
title: string;
url: string;
timestamp: number;
timestamp_format: string;
hostname: string;
hostlogo?: string;
web_main_body?: string;
_score?: number;
}
interface DashScopeSearchResponse {
headers: Record<string, unknown>;
rid: string;
status: number;
message: string | null;
data: {
total: number;
totalDistinct: number;
docs: DashScopeSearchItem[];
keywords?: string[];
qpInfos?: Array<{
query: string;
cleanQuery: string;
sensitive: boolean;
spellchecked: string;
spellcheck: boolean;
tokenized: string[];
stopWords: string[];
synonymWords: string[];
recognitions: unknown[];
rewrite: string;
operator: string;
}>;
aggs?: unknown;
extras?: Record<string, unknown>;
};
debug?: unknown;
success: boolean;
}
// File System Configuration
const QWEN_DIR = '.qwen';
const QWEN_CREDENTIAL_FILENAME = 'oauth_creds.json';
/**
* Get the path to the cached OAuth credentials file.
*/
function getQwenCachedCredentialPath(): string {
return path.join(os.homedir(), QWEN_DIR, QWEN_CREDENTIAL_FILENAME);
}
/**
* Load cached Qwen OAuth credentials from disk.
*/
async function loadQwenCredentials(): Promise<QwenCredentials | null> {
try {
const keyFile = getQwenCachedCredentialPath();
const creds = await fs.readFile(keyFile, 'utf-8');
return JSON.parse(creds) as QwenCredentials;
} catch {
return null;
}
}
/**
* Web search provider using Alibaba Cloud DashScope API.
*/
export class DashScopeProvider extends BaseWebSearchProvider {
readonly name = 'DashScope';
constructor(private readonly config: DashScopeProviderConfig) {
super();
}
isAvailable(): boolean {
// DashScope provider is only available when auth type is QWEN_OAUTH
// This ensures it's only used when OAuth credentials are available
return this.config.authType === 'qwen-oauth';
}
/**
* Get the access token and API endpoint for authentication and web search.
* Tries OAuth credentials first, falls back to apiKey if OAuth is not available.
* Returns both token and endpoint to avoid loading credentials multiple times.
*/
private async getAuthConfig(): Promise<{
accessToken: string | null;
apiEndpoint: string;
}> {
// Load credentials once
const credentials = await loadQwenCredentials();
// Get access token: try OAuth credentials first, fallback to apiKey
let accessToken: string | null = null;
if (credentials?.access_token) {
// Check if token is not expired
if (credentials.expiry_date && credentials.expiry_date > Date.now()) {
accessToken = credentials.access_token;
}
}
if (!accessToken) {
accessToken = this.config.apiKey || null;
}
// Get API endpoint: use resource_url from credentials
if (!credentials?.resource_url) {
throw new Error(
'No resource_url found in credentials. Please authenticate using OAuth',
);
}
// Normalize the URL: add protocol if missing
const baseUrl = credentials.resource_url.startsWith('http')
? credentials.resource_url
: `https://${credentials.resource_url}`;
// Remove trailing slash if present
const normalizedBaseUrl = baseUrl.replace(/\/$/, '');
const apiEndpoint = `${normalizedBaseUrl}/api/v1/indices/plugin/web_search`;
return { accessToken, apiEndpoint };
}
protected async performSearch(
query: string,
signal: AbortSignal,
): Promise<WebSearchResult> {
// Get access token and API endpoint (loads credentials once)
const { accessToken, apiEndpoint } = await this.getAuthConfig();
if (!accessToken) {
throw new Error(
'No access token available. Please authenticate using OAuth',
);
}
const requestBody = {
uq: query,
page: 1,
rows: this.config.maxResults || 10,
};
const response = await fetch(apiEndpoint, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
body: JSON.stringify(requestBody),
signal,
});
if (!response.ok) {
const text = await response.text().catch(() => '');
throw new Error(
`API error: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`,
);
}
const data = (await response.json()) as DashScopeSearchResponse;
if (data.status !== 0) {
throw new Error(`API error: ${data.message || 'Unknown error'}`);
}
const results: WebSearchResultItem[] = (data.data?.docs || []).map(
(item) => ({
title: item.title,
url: item.url,
content: item.snippet,
score: item._score,
publishedDate: item.timestamp_format,
}),
);
return {
query,
results,
};
}
}
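
The endpoint above is derived from the `resource_url` stored in the cached OAuth credentials; a small sketch of that normalization (the `resource_url` value here is made up for illustration):

```ts
// Hypothetical value; real values come from ~/.qwen/oauth_creds.json.
const resourceUrl = 'dashscope.aliyuncs.com/';

const baseUrl = resourceUrl.startsWith('http')
  ? resourceUrl
  : `https://${resourceUrl}`; // add protocol if missing
const endpoint = `${baseUrl.replace(/\/$/, '')}/api/v1/indices/plugin/web_search`;
// -> 'https://dashscope.aliyuncs.com/api/v1/indices/plugin/web_search'
```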

View File

@@ -1,91 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { BaseWebSearchProvider } from '../base-provider.js';
import type {
WebSearchResult,
WebSearchResultItem,
GoogleProviderConfig,
} from '../types.js';
interface GoogleSearchItem {
title: string;
link: string;
snippet?: string;
displayLink?: string;
formattedUrl?: string;
}
interface GoogleSearchResponse {
items?: GoogleSearchItem[];
searchInformation?: {
totalResults?: string;
searchTime?: number;
};
}
/**
* Web search provider using Google Custom Search API.
*/
export class GoogleProvider extends BaseWebSearchProvider {
readonly name = 'Google';
constructor(private readonly config: GoogleProviderConfig) {
super();
}
isAvailable(): boolean {
return !!(this.config.apiKey && this.config.searchEngineId);
}
protected async performSearch(
query: string,
signal: AbortSignal,
): Promise<WebSearchResult> {
const params = new URLSearchParams({
key: this.config.apiKey!,
cx: this.config.searchEngineId!,
q: query,
num: String(this.config.maxResults || 10),
safe: this.config.safeSearch || 'medium',
});
if (this.config.language) {
params.append('lr', `lang_${this.config.language}`);
}
if (this.config.country) {
params.append('cr', `country${this.config.country}`);
}
const url = `https://www.googleapis.com/customsearch/v1?${params.toString()}`;
const response = await fetch(url, {
method: 'GET',
signal,
});
if (!response.ok) {
const text = await response.text().catch(() => '');
throw new Error(
`API error: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`,
);
}
const data = (await response.json()) as GoogleSearchResponse;
const results: WebSearchResultItem[] = (data.items || []).map((item) => ({
title: item.title,
url: item.link,
content: item.snippet,
}));
return {
query,
results,
};
}
}
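
For reference, a config with language and country filters produces a request URL along these lines (key and engine ID are placeholders):

```ts
const params = new URLSearchParams({
  key: 'YOUR_API_KEY', // placeholder
  cx: 'YOUR_ENGINE_ID', // placeholder
  q: 'qwen code cli',
  num: '10',
  safe: 'medium',
});
params.append('lr', 'lang_en'); // language filter
params.append('cr', 'countryUS'); // country filter
const url = `https://www.googleapis.com/customsearch/v1?${params.toString()}`;
```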

View File

@@ -1,84 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { BaseWebSearchProvider } from '../base-provider.js';
import type {
WebSearchResult,
WebSearchResultItem,
TavilyProviderConfig,
} from '../types.js';
interface TavilyResultItem {
title: string;
url: string;
content?: string;
score?: number;
published_date?: string;
}
interface TavilySearchResponse {
query: string;
answer?: string;
results: TavilyResultItem[];
}
/**
* Web search provider using Tavily API.
*/
export class TavilyProvider extends BaseWebSearchProvider {
readonly name = 'Tavily';
constructor(private readonly config: TavilyProviderConfig) {
super();
}
isAvailable(): boolean {
return !!this.config.apiKey;
}
protected async performSearch(
query: string,
signal: AbortSignal,
): Promise<WebSearchResult> {
const response = await fetch('https://api.tavily.com/search', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
api_key: this.config.apiKey,
query,
search_depth: this.config.searchDepth || 'advanced',
max_results: this.config.maxResults || 5,
include_answer: this.config.includeAnswer !== false,
}),
signal,
});
if (!response.ok) {
const text = await response.text().catch(() => '');
throw new Error(
`API error: ${response.status} ${response.statusText}${text ? ` - ${text}` : ''}`,
);
}
const data = (await response.json()) as TavilySearchResponse;
const results: WebSearchResultItem[] = (data.results || []).map((r) => ({
title: r.title,
url: r.url,
content: r.content,
score: r.score,
publishedDate: r.published_date,
}));
return {
query,
answer: data.answer?.trim(),
results,
};
}
}

View File

@@ -1,156 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import type { ToolResult } from '../tools.js';
/**
* Common interface for all web search providers.
*/
export interface WebSearchProvider {
/**
* The name of the provider.
*/
readonly name: string;
/**
* Whether the provider is available (has required configuration).
*/
isAvailable(): boolean;
/**
* Perform a web search with the given query.
* @param query The search query
* @param signal Abort signal for cancellation
* @returns Promise resolving to search results
*/
search(query: string, signal: AbortSignal): Promise<WebSearchResult>;
}
/**
* Result item from a web search.
*/
export interface WebSearchResultItem {
title: string;
url: string;
content?: string;
score?: number;
publishedDate?: string;
}
/**
* Result from a web search operation.
*/
export interface WebSearchResult {
/**
* The search query that was executed.
*/
query: string;
/**
* A concise answer if available from the provider.
*/
answer?: string;
/**
* List of search result items.
*/
results: WebSearchResultItem[];
/**
* Provider-specific metadata.
*/
metadata?: Record<string, unknown>;
}
/**
* Extended tool result that includes sources for web search.
*/
export interface WebSearchToolResult extends ToolResult {
sources?: Array<{ title: string; url: string }>;
}
/**
* Parameters for the WebSearchTool.
*/
export interface WebSearchToolParams {
/**
* The search query.
*/
query: string;
/**
* Optional provider to use for the search.
* If not specified, the default provider will be used.
*/
provider?: string;
}
/**
* Configuration for web search providers.
*/
export interface WebSearchConfig {
/**
* List of available providers with their configurations.
*/
provider: WebSearchProviderConfig[];
/**
* The default provider to use.
*/
default: string;
}
/**
* Base configuration for Tavily provider.
*/
export interface TavilyProviderConfig {
type: 'tavily';
apiKey?: string;
searchDepth?: 'basic' | 'advanced';
maxResults?: number;
includeAnswer?: boolean;
}
/**
* Base configuration for Google provider.
*/
export interface GoogleProviderConfig {
type: 'google';
apiKey?: string;
searchEngineId?: string;
maxResults?: number;
safeSearch?: 'off' | 'medium' | 'high';
language?: string;
country?: string;
}
/**
* Base configuration for DashScope provider.
*/
export interface DashScopeProviderConfig {
type: 'dashscope';
apiKey?: string;
uid?: string;
appId?: string;
maxResults?: number;
scene?: string;
timeout?: number;
/**
* Optional auth type to determine provider availability.
* If set to 'qwen-oauth', the provider will be available.
   * If set to any other value or left undefined, the provider is treated as unavailable.
*/
authType?: string;
}
/**
* Discriminated union type for web search provider configurations.
* This ensures type safety when working with different provider configs.
*/
export type WebSearchProviderConfig =
| TavilyProviderConfig
| GoogleProviderConfig
| DashScopeProviderConfig;
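
Putting the union to use, a `webSearch` settings object satisfying `WebSearchConfig` could look like this (keys and IDs are placeholders; only providers whose required credentials are present will report themselves as available):

```ts
const webSearch: WebSearchConfig = {
  default: 'tavily',
  provider: [
    { type: 'tavily', apiKey: 'tvly-xxxx', searchDepth: 'advanced', maxResults: 5 },
    { type: 'google', apiKey: 'AIza-xxxx', searchEngineId: 'cse-id', safeSearch: 'medium' },
    { type: 'dashscope', maxResults: 10 }, // available only under Qwen OAuth
  ],
};
```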

View File

@@ -1,42 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
/**
* Utility functions for web search formatting and processing.
*/
/**
* Build content string with appended sources section.
* @param content Main content text
* @param sources Array of source objects
* @returns Combined content with sources
*/
export function buildContentWithSources(
content: string,
sources: Array<{ title: string; url: string }>,
): string {
if (!sources.length) return content;
const sourceList = sources
.map((s, i) => `[${i + 1}] ${s.title || 'Untitled'} (${s.url})`)
.join('\n');
return `${content}\n\nSources:\n${sourceList}`;
}
/**
* Build a concise summary from top search results.
* @param sources Array of source objects
* @param maxResults Maximum number of results to include
* @returns Concise summary string
*/
export function buildSummary(
sources: Array<{ title: string; url: string }>,
maxResults: number = 3,
): string {
return sources
.slice(0, maxResults)
.map((s, i) => `${i + 1}. ${s.title} - ${s.url}`)
.join('\n');
}
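
A quick illustration of the output shape (values are made up):

```ts
const text = buildContentWithSources('Qwen Code supports configurable web search.', [
  { title: 'Qwen Code docs', url: 'https://example.com/docs' },
]);
// text ===
// 'Qwen Code supports configurable web search.\n\nSources:\n[1] Qwen Code docs (https://example.com/docs)'
```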

View File

@@ -4,7 +4,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
import type { Content, Part } from '@google/genai';
import type { Part } from '@google/genai';
import type { Config } from '../config/config.js';
import { getFolderStructure } from './getFolderStructure.js';
@@ -107,23 +107,3 @@ ${directoryContext}
return initialParts;
}
export async function getInitialChatHistory(
config: Config,
extraHistory?: Content[],
): Promise<Content[]> {
const envParts = await getEnvironmentContext(config);
const envContextString = envParts.map((part) => part.text || '').join('\n\n');
return [
{
role: 'user',
parts: [{ text: envContextString }],
},
{
role: 'model',
parts: [{ text: 'Got it. Thanks for the context!' }],
},
...(extraHistory ?? []),
];
}

View File

@@ -1,381 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import * as path from 'node:path';
import * as os from 'os';
import { promises as fs } from 'node:fs';
import { OpenAILogger } from './openaiLogger.js';
describe('OpenAILogger', () => {
let originalCwd: string;
let testTempDir: string;
const createdDirs: string[] = [];
beforeEach(() => {
originalCwd = process.cwd();
testTempDir = path.join(os.tmpdir(), `openai-logger-test-${Date.now()}`);
createdDirs.length = 0; // Clear array
});
afterEach(async () => {
// Clean up all created directories
const cleanupPromises = [
testTempDir,
...createdDirs,
path.resolve(process.cwd(), 'relative-logs'),
path.resolve(process.cwd(), 'custom-logs'),
path.resolve(process.cwd(), 'test-relative-logs'),
path.join(os.homedir(), 'custom-logs'),
path.join(os.homedir(), 'test-openai-logs'),
].map(async (dir) => {
try {
await fs.rm(dir, { recursive: true, force: true });
} catch {
// Ignore cleanup errors
}
});
await Promise.all(cleanupPromises);
process.chdir(originalCwd);
});
describe('constructor', () => {
it('should use default directory when no custom directory is provided', () => {
const logger = new OpenAILogger();
// We can't directly access private logDir, but we can verify behavior
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should accept absolute path as custom directory', () => {
const customDir = '/absolute/path/to/logs';
const logger = new OpenAILogger(customDir);
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should resolve relative path to absolute path', async () => {
const relativeDir = 'custom-logs';
const logger = new OpenAILogger(relativeDir);
const expectedDir = path.resolve(process.cwd(), relativeDir);
createdDirs.push(expectedDir);
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should expand ~ to home directory', () => {
const customDir = '~/custom-logs';
const logger = new OpenAILogger(customDir);
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should expand ~/ to home directory', () => {
const customDir = '~/custom-logs';
const logger = new OpenAILogger(customDir);
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should handle just ~ as home directory', () => {
const customDir = '~';
const logger = new OpenAILogger(customDir);
expect(logger).toBeInstanceOf(OpenAILogger);
});
});
describe('initialize', () => {
it('should create directory if it does not exist', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const dirExists = await fs
.access(testTempDir)
.then(() => true)
.catch(() => false);
expect(dirExists).toBe(true);
});
it('should create nested directories recursively', async () => {
const nestedDir = path.join(testTempDir, 'nested', 'deep', 'path');
const logger = new OpenAILogger(nestedDir);
await logger.initialize();
const dirExists = await fs
.access(nestedDir)
.then(() => true)
.catch(() => false);
expect(dirExists).toBe(true);
});
it('should not throw if directory already exists', async () => {
await fs.mkdir(testTempDir, { recursive: true });
const logger = new OpenAILogger(testTempDir);
await expect(logger.initialize()).resolves.not.toThrow();
});
});
describe('logInteraction', () => {
it('should create log file with correct format', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
expect(logPath).toContain(testTempDir);
expect(logPath).toMatch(
/openai-\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}\.\d{3}Z-[a-f0-9]{8}\.json/,
);
const fileExists = await fs
.access(logPath)
.then(() => true)
.catch(() => false);
expect(fileExists).toBe(true);
});
it('should write correct log data structure', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
const logContent = JSON.parse(await fs.readFile(logPath, 'utf-8'));
expect(logContent).toHaveProperty('timestamp');
expect(logContent).toHaveProperty('request', request);
expect(logContent).toHaveProperty('response', response);
expect(logContent).toHaveProperty('error', null);
expect(logContent).toHaveProperty('system');
expect(logContent.system).toHaveProperty('hostname');
expect(logContent.system).toHaveProperty('platform');
expect(logContent.system).toHaveProperty('release');
expect(logContent.system).toHaveProperty('nodeVersion');
});
it('should log error when provided', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const error = new Error('Test error');
const logPath = await logger.logInteraction(request, undefined, error);
const logContent = JSON.parse(await fs.readFile(logPath, 'utf-8'));
expect(logContent).toHaveProperty('error');
expect(logContent.error).toHaveProperty('message', 'Test error');
expect(logContent.error).toHaveProperty('stack');
expect(logContent.response).toBeNull();
});
it('should use custom directory when provided', async () => {
const customDir = path.join(testTempDir, 'custom-logs');
const logger = new OpenAILogger(customDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
expect(logPath).toContain(customDir);
expect(logPath.startsWith(customDir)).toBe(true);
});
it('should resolve relative path correctly', async () => {
const relativeDir = 'relative-logs';
const logger = new OpenAILogger(relativeDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
const expectedDir = path.resolve(process.cwd(), relativeDir);
createdDirs.push(expectedDir);
expect(logPath).toContain(expectedDir);
});
it('should expand ~ correctly', async () => {
const customDir = '~/test-openai-logs';
const logger = new OpenAILogger(customDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
const expectedDir = path.join(os.homedir(), 'test-openai-logs');
createdDirs.push(expectedDir);
expect(logPath).toContain(expectedDir);
});
});
describe('getLogFiles', () => {
it('should return empty array when directory does not exist', async () => {
const logger = new OpenAILogger(testTempDir);
const files = await logger.getLogFiles();
expect(files).toEqual([]);
});
it('should return log files after initialization', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
await logger.logInteraction(request, response);
const files = await logger.getLogFiles();
expect(files.length).toBeGreaterThan(0);
expect(files[0]).toMatch(/openai-.*\.json$/);
});
it('should return only log files matching pattern', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
// Create a log file
await logger.logInteraction({ test: 'request' }, { test: 'response' });
// Create a non-log file
await fs.writeFile(path.join(testTempDir, 'other-file.txt'), 'content');
const files = await logger.getLogFiles();
expect(files.length).toBe(1);
expect(files[0]).toMatch(/openai-.*\.json$/);
});
it('should respect limit parameter', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
// Create multiple log files
for (let i = 0; i < 5; i++) {
await logger.logInteraction(
{ test: `request-${i}` },
{ test: `response-${i}` },
);
// Small delay to ensure different timestamps
await new Promise((resolve) => setTimeout(resolve, 10));
}
const allFiles = await logger.getLogFiles();
expect(allFiles.length).toBe(5);
const limitedFiles = await logger.getLogFiles(3);
expect(limitedFiles.length).toBe(3);
});
it('should return files sorted by most recent first', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const files: string[] = [];
for (let i = 0; i < 3; i++) {
const logPath = await logger.logInteraction(
{ test: `request-${i}` },
{ test: `response-${i}` },
);
files.push(logPath);
await new Promise((resolve) => setTimeout(resolve, 10));
}
const retrievedFiles = await logger.getLogFiles();
expect(retrievedFiles[0]).toBe(files[2]); // Most recent first
expect(retrievedFiles[1]).toBe(files[1]);
expect(retrievedFiles[2]).toBe(files[0]);
});
});
describe('readLogFile', () => {
it('should read and parse log file correctly', async () => {
const logger = new OpenAILogger(testTempDir);
await logger.initialize();
const request = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'test' }],
};
const response = { id: 'test-id', choices: [] };
const logPath = await logger.logInteraction(request, response);
const logData = await logger.readLogFile(logPath);
expect(logData).toHaveProperty('timestamp');
expect(logData).toHaveProperty('request', request);
expect(logData).toHaveProperty('response', response);
});
it('should throw error when file does not exist', async () => {
const logger = new OpenAILogger(testTempDir);
const nonExistentPath = path.join(testTempDir, 'non-existent.json');
await expect(logger.readLogFile(nonExistentPath)).rejects.toThrow();
});
});
describe('path resolution', () => {
it('should normalize absolute paths', () => {
const absolutePath = '/tmp/test/logs';
const logger = new OpenAILogger(absolutePath);
expect(logger).toBeInstanceOf(OpenAILogger);
});
it('should resolve relative paths based on current working directory', async () => {
const relativePath = 'test-relative-logs';
const logger = new OpenAILogger(relativePath);
await logger.initialize();
const request = { test: 'request' };
const response = { test: 'response' };
const logPath = await logger.logInteraction(request, response);
const expectedBaseDir = path.resolve(process.cwd(), relativePath);
createdDirs.push(expectedBaseDir);
expect(logPath).toContain(expectedBaseDir);
});
it('should handle paths with special characters', async () => {
const specialPath = path.join(testTempDir, 'logs-with-special-chars');
const logger = new OpenAILogger(specialPath);
await logger.initialize();
const request = { test: 'request' };
const response = { test: 'response' };
const logPath = await logger.logInteraction(request, response);
expect(logPath).toContain(specialPath);
});
});
});

View File

@@ -18,23 +18,10 @@ export class OpenAILogger {
/**
* Creates a new OpenAI logger
* @param customLogDir Optional custom log directory path (supports relative paths, absolute paths, and ~ expansion)
* @param customLogDir Optional custom log directory path
*/
constructor(customLogDir?: string) {
if (customLogDir) {
// Resolve relative paths to absolute paths
// Handle ~ expansion
let resolvedPath = customLogDir;
if (customLogDir === '~' || customLogDir.startsWith('~/')) {
resolvedPath = path.join(os.homedir(), customLogDir.slice(1));
} else if (!path.isAbsolute(customLogDir)) {
// If it's a relative path, resolve it relative to current working directory
resolvedPath = path.resolve(process.cwd(), customLogDir);
}
this.logDir = path.normalize(resolvedPath);
} else {
this.logDir = path.join(process.cwd(), 'logs', 'openai');
}
this.logDir = customLogDir || path.join(process.cwd(), 'logs', 'openai');
}
/**

View File

@@ -4,53 +4,8 @@
* SPDX-License-Identifier: Apache-2.0
*/
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';
import {
escapePath,
resolvePath,
validatePath,
resolveAndValidatePath,
unescapePath,
isSubpath,
} from './paths.js';
import type { Config } from '../config/config.js';
function createConfigStub({
targetDir,
allowedDirectories,
}: {
targetDir: string;
allowedDirectories: string[];
}): Config {
const resolvedTargetDir = path.resolve(targetDir);
const resolvedDirectories = allowedDirectories.map((dir) =>
path.resolve(dir),
);
const workspaceContext = {
isPathWithinWorkspace(testPath: string) {
const resolvedPath = path.resolve(testPath);
return resolvedDirectories.some((dir) => {
const relative = path.relative(dir, resolvedPath);
return (
relative === '' ||
(!relative.startsWith('..') && !path.isAbsolute(relative))
);
});
},
getDirectories() {
return resolvedDirectories;
},
};
return {
getTargetDir: () => resolvedTargetDir,
getWorkspaceContext: () => workspaceContext,
} as unknown as Config;
}
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { escapePath, unescapePath, isSubpath } from './paths.js';
describe('escapePath', () => {
it('should escape spaces', () => {
@@ -359,240 +314,3 @@ describe('isSubpath on Windows', () => {
expect(isSubpath('Users\\Test\\file.txt', 'Users\\Test')).toBe(false);
});
});
describe('resolvePath', () => {
it('resolves relative paths against the provided base directory', () => {
const result = resolvePath('/home/user/project', 'src/main.ts');
expect(result).toBe(path.resolve('/home/user/project', 'src/main.ts'));
});
it('resolves relative paths against cwd when baseDir is undefined', () => {
const cwd = process.cwd();
const result = resolvePath(undefined, 'src/main.ts');
expect(result).toBe(path.resolve(cwd, 'src/main.ts'));
});
it('returns absolute paths unchanged', () => {
const absolutePath = '/absolute/path/to/file.ts';
const result = resolvePath('/some/base', absolutePath);
expect(result).toBe(absolutePath);
});
it('expands tilde to home directory', () => {
const homeDir = os.homedir();
const result = resolvePath(undefined, '~');
expect(result).toBe(homeDir);
});
it('expands tilde-prefixed paths to home directory', () => {
const homeDir = os.homedir();
const result = resolvePath(undefined, '~/documents/file.txt');
expect(result).toBe(path.join(homeDir, 'documents/file.txt'));
});
it('uses baseDir when provided for relative paths', () => {
const baseDir = '/custom/base';
const result = resolvePath(baseDir, './relative/path');
expect(result).toBe(path.resolve(baseDir, './relative/path'));
});
it('handles tilde expansion regardless of baseDir', () => {
const homeDir = os.homedir();
const result = resolvePath('/some/base', '~/file.txt');
expect(result).toBe(path.join(homeDir, 'file.txt'));
});
it('handles dot paths correctly', () => {
const result = resolvePath('/base/dir', '.');
expect(result).toBe(path.resolve('/base/dir', '.'));
});
it('handles parent directory references', () => {
const result = resolvePath('/base/dir/subdir', '..');
expect(result).toBe(path.resolve('/base/dir/subdir', '..'));
});
});
describe('validatePath', () => {
let workspaceRoot: string;
let config: Config;
beforeAll(() => {
workspaceRoot = fs.mkdtempSync(
path.join(os.tmpdir(), 'validate-path-test-'),
);
fs.mkdirSync(path.join(workspaceRoot, 'subdir'));
config = createConfigStub({
targetDir: workspaceRoot,
allowedDirectories: [workspaceRoot],
});
});
afterAll(() => {
fs.rmSync(workspaceRoot, { recursive: true, force: true });
});
it('validates paths within workspace boundaries', () => {
const validPath = path.join(workspaceRoot, 'subdir');
expect(() => validatePath(config, validPath)).not.toThrow();
});
it('throws when path is outside workspace boundaries', () => {
const outsidePath = path.join(os.tmpdir(), 'outside');
expect(() => validatePath(config, outsidePath)).toThrowError(
/Path is not within workspace/,
);
});
it('throws when path does not exist', () => {
const nonExistentPath = path.join(workspaceRoot, 'nonexistent');
expect(() => validatePath(config, nonExistentPath)).toThrowError(
/Path does not exist:/,
);
});
it('throws when path is a file, not a directory (default behavior)', () => {
const filePath = path.join(workspaceRoot, 'test-file.txt');
fs.writeFileSync(filePath, 'content');
try {
expect(() => validatePath(config, filePath)).toThrowError(
/Path is not a directory/,
);
} finally {
fs.rmSync(filePath);
}
});
it('allows files when allowFiles option is true', () => {
const filePath = path.join(workspaceRoot, 'test-file.txt');
fs.writeFileSync(filePath, 'content');
try {
expect(() =>
validatePath(config, filePath, { allowFiles: true }),
).not.toThrow();
} finally {
fs.rmSync(filePath);
}
});
it('validates paths at workspace root', () => {
expect(() => validatePath(config, workspaceRoot)).not.toThrow();
});
it('validates paths in allowed directories', () => {
const extraDir = fs.mkdtempSync(path.join(os.tmpdir(), 'validate-extra-'));
try {
const configWithExtra = createConfigStub({
targetDir: workspaceRoot,
allowedDirectories: [workspaceRoot, extraDir],
});
expect(() => validatePath(configWithExtra, extraDir)).not.toThrow();
} finally {
fs.rmSync(extraDir, { recursive: true, force: true });
}
});
});
describe('resolveAndValidatePath', () => {
let workspaceRoot: string;
let config: Config;
beforeAll(() => {
workspaceRoot = fs.mkdtempSync(
path.join(os.tmpdir(), 'resolve-and-validate-'),
);
fs.mkdirSync(path.join(workspaceRoot, 'subdir'));
config = createConfigStub({
targetDir: workspaceRoot,
allowedDirectories: [workspaceRoot],
});
});
afterAll(() => {
fs.rmSync(workspaceRoot, { recursive: true, force: true });
});
it('returns the target directory when no path is provided', () => {
expect(resolveAndValidatePath(config)).toBe(workspaceRoot);
});
it('resolves relative paths within the workspace', () => {
const expected = path.join(workspaceRoot, 'subdir');
expect(resolveAndValidatePath(config, 'subdir')).toBe(expected);
});
it('allows absolute paths that are permitted by the workspace context', () => {
const extraDir = fs.mkdtempSync(
path.join(os.tmpdir(), 'resolve-and-validate-extra-'),
);
try {
const configWithExtra = createConfigStub({
targetDir: workspaceRoot,
allowedDirectories: [workspaceRoot, extraDir],
});
expect(resolveAndValidatePath(configWithExtra, extraDir)).toBe(extraDir);
} finally {
fs.rmSync(extraDir, { recursive: true, force: true });
}
});
it('expands tilde-prefixed paths using the home directory', () => {
const fakeHome = fs.mkdtempSync(
path.join(os.tmpdir(), 'resolve-and-validate-home-'),
);
const homeSubdir = path.join(fakeHome, 'project');
fs.mkdirSync(homeSubdir);
const homedirSpy = vi.spyOn(os, 'homedir').mockReturnValue(fakeHome);
try {
const configWithHome = createConfigStub({
targetDir: workspaceRoot,
allowedDirectories: [workspaceRoot, fakeHome],
});
expect(resolveAndValidatePath(configWithHome, '~/project')).toBe(
homeSubdir,
);
expect(resolveAndValidatePath(configWithHome, '~')).toBe(fakeHome);
} finally {
homedirSpy.mockRestore();
fs.rmSync(fakeHome, { recursive: true, force: true });
}
});
it('throws when the path resolves outside of the workspace', () => {
expect(() => resolveAndValidatePath(config, '../outside')).toThrowError(
/Path is not within workspace/,
);
});
it('throws when the path does not exist', () => {
expect(() => resolveAndValidatePath(config, 'missing')).toThrowError(
/Path does not exist:/,
);
});
it('throws when the path points to a file (default behavior)', () => {
const filePath = path.join(workspaceRoot, 'file.txt');
fs.writeFileSync(filePath, 'content');
try {
expect(() => resolveAndValidatePath(config, 'file.txt')).toThrowError(
`Path is not a directory: ${filePath}`,
);
} finally {
fs.rmSync(filePath);
}
});
it('allows file paths when allowFiles option is true', () => {
const filePath = path.join(workspaceRoot, 'file.txt');
fs.writeFileSync(filePath, 'content');
try {
const result = resolveAndValidatePath(config, 'file.txt', {
allowFiles: true,
});
expect(result).toBe(filePath);
} finally {
fs.rmSync(filePath);
}
});
});

View File

@@ -4,12 +4,9 @@
* SPDX-License-Identifier: Apache-2.0
*/
import fs from 'node:fs';
import path from 'node:path';
import os from 'node:os';
import * as crypto from 'node:crypto';
import type { Config } from '../config/config.js';
import { isNodeError } from './errors.js';
export const QWEN_DIR = '.qwen';
export const GOOGLE_ACCOUNTS_FILENAME = 'google_accounts.json';
@@ -194,93 +191,3 @@ export function isSubpath(parentPath: string, childPath: string): boolean {
!pathModule.isAbsolute(relative)
);
}
/**
* Resolves a path with tilde (~) expansion and relative path resolution.
* Handles tilde expansion for home directory and resolves relative paths
* against the provided base directory or current working directory.
*
* @param baseDir The base directory to resolve relative paths against (defaults to current working directory)
* @param relativePath The path to resolve (can be relative, absolute, or tilde-prefixed)
* @returns The resolved absolute path
*/
export function resolvePath(
baseDir: string | undefined = process.cwd(),
relativePath: string,
): string {
const homeDir = os.homedir();
if (relativePath === '~') {
return homeDir;
} else if (relativePath.startsWith('~/')) {
return path.join(homeDir, relativePath.slice(2));
} else if (path.isAbsolute(relativePath)) {
return relativePath;
} else {
return path.resolve(baseDir, relativePath);
}
}
export interface PathValidationOptions {
/**
* If true, allows both files and directories. If false (default), only allows directories.
*/
allowFiles?: boolean;
}
/**
* Validates that a resolved path exists within the workspace boundaries.
*
* @param config The configuration object containing workspace context
* @param resolvedPath The absolute path to validate
* @param options Validation options
* @throws Error if the path is outside workspace boundaries, doesn't exist, or is not a directory (when allowFiles is false)
*/
export function validatePath(
config: Config,
resolvedPath: string,
options: PathValidationOptions = {},
): void {
const { allowFiles = false } = options;
const workspaceContext = config.getWorkspaceContext();
if (!workspaceContext.isPathWithinWorkspace(resolvedPath)) {
throw new Error('Path is not within workspace');
}
try {
const stats = fs.statSync(resolvedPath);
if (!allowFiles && !stats.isDirectory()) {
throw new Error(`Path is not a directory: ${resolvedPath}`);
}
} catch (error: unknown) {
if (isNodeError(error) && error.code === 'ENOENT') {
throw new Error(`Path does not exist: ${resolvedPath}`);
}
throw error;
}
}
/**
* Resolves a path relative to the workspace root and verifies that it exists
* within the workspace boundaries defined in the config.
*
* @param config The configuration object
* @param relativePath The relative path to resolve (optional, defaults to target directory)
* @param options Validation options (e.g., allowFiles to permit file paths)
*/
export function resolveAndValidatePath(
config: Config,
relativePath?: string,
options: PathValidationOptions = {},
): string {
const targetDir = config.getTargetDir();
if (!relativePath) {
return targetDir;
}
const resolvedPath = resolvePath(targetDir, relativePath);
validatePath(config, resolvedPath, options);
return resolvedPath;
}
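
Taken together, callers of the removed helpers used them roughly like this (paths are illustrative; `config` is whatever `Config` instance the tool already holds):

```ts
// Resolve a user-supplied directory against the workspace root; throws if the
// path is outside the workspace, does not exist, or is not a directory.
const dir = resolveAndValidatePath(config, 'packages/core');

// Tilde-prefixed input is expanded before validation.
const home = resolveAndValidatePath(config, '~/projects/demo');

// Allow a file target as well (e.g. for single-file operations).
const file = resolveAndValidatePath(config, 'README.md', { allowFiles: true });
```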

View File

@@ -152,16 +152,7 @@ describe('ripgrepUtils', () => {
});
describe('canUseRipgrep', () => {
it('should return true if ripgrep binary exists (builtin)', async () => {
(fileExists as Mock).mockResolvedValue(true);
const result = await canUseRipgrep(true);
expect(result).toBe(true);
expect(fileExists).toHaveBeenCalledOnce();
});
it('should return true if ripgrep binary exists (default)', async () => {
it('should return true if ripgrep binary exists', async () => {
(fileExists as Mock).mockResolvedValue(true);
const result = await canUseRipgrep();
@@ -170,26 +161,15 @@ describe('ripgrepUtils', () => {
expect(fileExists).toHaveBeenCalledOnce();
});
it('should fall back to system rg if bundled ripgrep binary does not exist', async () => {
it('should return false if ripgrep binary does not exist', async () => {
(fileExists as Mock).mockResolvedValue(false);
// When useBuiltin is true but bundled binary doesn't exist,
// it should fall back to checking system rg (which will spawn a process)
// In this test environment, system rg is likely available, so result should be true
// unless spawn fails
const result = await canUseRipgrep();
// The test may pass or fail depending on system rg availability
// Just verify that fileExists was called to check bundled binary first
expect(result).toBe(false);
expect(fileExists).toHaveBeenCalledOnce();
// Result depends on whether system rg is installed
expect(typeof result).toBe('boolean');
});
// Note: Tests for system ripgrep detection (useBuiltin=false) would require mocking
// the child_process spawn function, which is complex in ESM. These cases are tested
// indirectly through integration tests.
it('should return false if platform is unsupported', async () => {
const originalPlatform = process.platform;

View File

@@ -85,31 +85,13 @@ export function getRipgrepPath(): string {
/**
* Checks if ripgrep binary is available
* @param useBuiltin If true, tries bundled ripgrep first, then falls back to system ripgrep.
* If false, only checks for system ripgrep.
*/
export async function canUseRipgrep(
useBuiltin: boolean = true,
): Promise<boolean> {
export async function canUseRipgrep(): Promise<boolean> {
try {
if (useBuiltin) {
// Try bundled ripgrep first
const rgPath = getRipgrepPath();
if (await fileExists(rgPath)) {
return true;
}
// Fallback to system rg if bundled binary is not available
}
// Check for system ripgrep by trying to spawn 'rg --version'
const { spawn } = await import('node:child_process');
return await new Promise<boolean>((resolve) => {
const proc = spawn('rg', ['--version']);
proc.on('error', () => resolve(false));
proc.on('exit', (code) => resolve(code === 0));
});
const rgPath = getRipgrepPath();
return await fileExists(rgPath);
} catch (_error) {
// Unsupported platform/arch or other error
// Unsupported platform/arch
return false;
}
}
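
After this change, availability depends only on the bundled binary; a sketch of how a caller might consume the result (the fallback behavior in the comment is assumed, not taken from the repository):

```ts
const ok = await canUseRipgrep();
if (!ok) {
  // Bundled rg is missing for this platform/arch; the caller decides what to
  // do next (for example, fall back to a non-ripgrep search implementation).
}
```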

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code-test-utils",
"version": "0.1.5",
"version": "0.1.2",
"private": true,
"main": "src/index.ts",
"license": "Apache-2.0",

View File

@@ -2,7 +2,7 @@
"name": "qwen-code-vscode-ide-companion",
"displayName": "Qwen Code Companion",
"description": "Enable Qwen Code with direct access to your VS Code workspace.",
"version": "0.1.5",
"version": "0.1.2",
"publisher": "qwenlm",
"icon": "assets/icon.png",
"repository": {

View File

@@ -85,7 +85,7 @@ const distPackageJson = {
bin: {
qwen: 'cli.js',
},
files: ['cli.js', 'vendor', '*.sb', 'README.md', 'LICENSE'],
files: ['cli.js', 'vendor', 'README.md', 'LICENSE'],
config: rootPackageJson.config,
dependencies: runtimeDependencies,
optionalDependencies: {

View File

@@ -69,14 +69,7 @@ if (process.env.DEBUG) {
// than the relaunched process making it harder to debug.
env.GEMINI_CLI_NO_RELAUNCH = 'true';
}
// Use process.cwd() to inherit the working directory from launch.json cwd setting
// This allows debugging from a specific directory (e.g., .todo)
const workingDir = process.env.QWEN_WORKING_DIR || process.cwd();
const child = spawn('node', nodeArgs, {
stdio: 'inherit',
env,
cwd: workingDir,
});
const child = spawn('node', nodeArgs, { stdio: 'inherit', env });
child.on('close', (code) => {
process.exit(code);