Compare commits


2 Commits

40 changed files with 839 additions and 576 deletions

View File

@@ -5,13 +5,11 @@ Qwen Code supports two authentication methods. Pick the one that matches how you
- **Qwen OAuth (recommended)**: sign in with your `qwen.ai` account in a browser.
- **OpenAI-compatible API**: use an API key (OpenAI or any OpenAI-compatible provider / endpoint).
![](https://img.alicdn.com/imgextra/i2/O1CN01IxI1bt1sNO543AVTT_!!6000000005754-0-tps-1958-822.jpg)
## Option 1: Qwen OAuth (recommended & free) 👍
Use this if you want the simplest setup and you're using Qwen models.
Use this if you want the simplest setup and you're using Qwen models.
- **How it works**: on first start, Qwen Code opens a browser login page. After you finish, credentials are cached locally so you usually won't need to log in again.
- **How it works**: on first start, Qwen Code opens a browser login page. After you finish, credentials are cached locally so you usually won't need to log in again.
- **Requirements**: a `qwen.ai` account + internet access (at least for the first login).
- **Benefits**: no API key management, automatic credential refresh.
- **Cost & quota**: free, with a quota of **60 requests/minute** and **2,000 requests/day**.
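In practice the setup is just launching the CLI and completing the browser sign-in; a minimal sketch of the first run:

```bash
# First run: Qwen Code opens a qwen.ai login page in the browser,
# then caches the credentials locally for later sessions.
qwen
```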
@@ -26,54 +24,15 @@ qwen
Use this if you want to use OpenAI models or any provider that exposes an OpenAI-compatible API (e.g. OpenAI, Azure OpenAI, OpenRouter, ModelScope, Alibaba Cloud Bailian, or a self-hosted compatible endpoint).
### Recommended: Coding Plan (subscription-based) 🚀
### Quick start (interactive, recommended for local use)
Use this if you want predictable costs with higher usage quotas for the qwen3-coder-plus model.
When you choose the OpenAI-compatible option in the CLI, it will prompt you for:
> [!IMPORTANT]
>
> Coding Plan is only available to users in mainland China (Beijing region).
- **API key**
- **Base URL** (default: `https://api.openai.com/v1`)
- **Model** (default: `gpt-4o`)
- **How it works**: subscribe to the Coding Plan with a fixed monthly fee, then configure Qwen Code to use the dedicated endpoint and your subscription API key.
- **Requirements**: an active Coding Plan subscription from [Alibaba Cloud Bailian](https://bailian.console.aliyun.com/cn-beijing/?tab=globalset#/efm/coding_plan).
- **Benefits**: higher usage quotas, predictable monthly costs, and access to the latest qwen3-coder-plus model.
- **Cost & quota**: varies by plan (see table below).
#### Coding Plan Pricing & Quotas
| Feature | Lite Basic Plan | Pro Advanced Plan |
| :------------------ | :-------------------- | :-------------------- |
| **Price** | ¥40/month | ¥200/month |
| **5-Hour Limit** | Up to 1,200 requests | Up to 6,000 requests |
| **Weekly Limit** | Up to 9,000 requests | Up to 45,000 requests |
| **Monthly Limit** | Up to 18,000 requests | Up to 90,000 requests |
| **Supported Model** | qwen3-coder-plus | qwen3-coder-plus |
#### Quick Setup for Coding Plan
When you select the OpenAI-compatible option in the CLI, enter these values:
- **API key**: `sk-sp-xxxxx`
- **Base URL**: `https://coding.dashscope.aliyuncs.com/v1`
- **Model**: `qwen3-coder-plus`
> **Note**: Coding Plan API keys have the format `sk-sp-xxxxx`, which is different from standard Alibaba Cloud API keys.
#### Configure via Environment Variables
Set these environment variables to use Coding Plan:
```bash
export OPENAI_API_KEY="your-coding-plan-api-key" # Format: sk-sp-xxxxx
export OPENAI_BASE_URL="https://coding.dashscope.aliyuncs.com/v1"
export OPENAI_MODEL="qwen3-coder-plus"
```
For more details about Coding Plan, including subscription options and troubleshooting, see the [full Coding Plan documentation](https://bailian.console.aliyun.com/cn-beijing/?tab=doc#/doc/?type=model&url=3005961).
### Other OpenAI-compatible Providers
If you are using other providers (OpenAI, Azure, local LLMs, etc.), use the following configuration methods.
> **Note:** the CLI may display the key in plain text for verification. Make sure your terminal is not being recorded or shared.
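For example, the same three environment variables shown above for the Coding Plan also point Qwen Code at any other OpenAI-compatible endpoint; the URL and model name below are placeholders, not real values:

```bash
export OPENAI_API_KEY="your-provider-api-key"
export OPENAI_BASE_URL="https://your-provider.example.com/v1"  # provider-specific endpoint
export OPENAI_MODEL="your-model-name"
```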
### Configure via command-line arguments

View File

@@ -241,6 +241,7 @@ Per-field precedence for `generationConfig`:
| ------------------------------------------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| `context.fileName` | string or array of strings | The name of the context file(s). | `undefined` |
| `context.importFormat` | string | The format to use when importing memory. | `undefined` |
| `context.discoveryMaxDirs` | number | Maximum number of directories to search for memory. | `200` |
| `context.includeDirectories` | array | Additional absolute or relative paths to include in the workspace context. Missing directories are skipped with a warning by default. Paths can use `~` to refer to the user's home directory. This setting can be combined with the `--include-directories` command-line flag. | `[]` |
| `context.loadFromIncludeDirectories` | boolean | Controls the behavior of the `/memory refresh` command. If set to `true`, `QWEN.md` files are loaded from all included directories. If set to `false`, `QWEN.md` is only loaded from the current directory. | `false` |
| `context.fileFiltering.respectGitIgnore` | boolean | Respect .gitignore files when searching. | `true` |
@@ -310,12 +311,6 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
>
> **Note about advanced.tavilyApiKey:** This is a legacy configuration format. For Qwen OAuth users, the DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
#### experimental
| Setting | Type | Description | Default |
| --------------------- | ------- | -------------------------------- | ------- |
| `experimental.skills` | boolean | Enable experimental Agent Skills | `false` |
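For reference, turning this on in `settings.json` would look like the following minimal sketch, using the `experimental.skills` key from the table above:

```json
{
  "experimental": {
    "skills": true
  }
}
```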
#### mcpServers
Configures connections to one or more Model Context Protocol (MCP) servers for discovering and using custom tools. Qwen Code attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names will be prefixed with the server alias you defined in the configuration (e.g., `serverAlias__actualToolName`) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of `command`, `url`, or `httpUrl` must be provided. If multiple are specified, the order of precedence is `httpUrl`, then `url`, then `command`.
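As an illustration of the shape this takes in `settings.json`, here is a minimal sketch; the server aliases are placeholders, and fields beyond `command`, `url`, and `httpUrl` (such as `args`) are assumptions rather than a confirmed schema:

```json
{
  "mcpServers": {
    "localTools": {
      "command": "node",
      "args": ["./my-mcp-server.js"]
    },
    "remoteTools": {
      "httpUrl": "https://example.com/mcp"
    }
  }
}
```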
@@ -534,13 +529,16 @@ Here's a conceptual example of what a context file at the root of a TypeScript p
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
- Location: `~/.qwen/<configured-context-filename>` (e.g., `~/.qwen/QWEN.md` in your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
- Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
- Location: The CLI also scans for the configured context file in subdirectories _below_ the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the `context.discoveryMaxDirs` setting in your `settings.json` file (see the example settings snippet after this list).
- Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](../configuration/memory).
- **Commands for Memory Management:**
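Pulling the context options from this section together, a minimal `settings.json` sketch might look like the following; all values are illustrative:

```json
{
  "context": {
    "fileName": "QWEN.md",
    "discoveryMaxDirs": 500,
    "includeDirectories": ["../shared-docs"],
    "fileFiltering": {
      "respectGitIgnore": true
    }
  }
}
```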

View File

@@ -11,29 +11,12 @@ This guide shows you how to create, use, and manage Agent Skills in **Qwen Code*
## Prerequisites
- Qwen Code (recent version)
## How to enable
### Via CLI flag
- Run with the experimental flag enabled:
```bash
qwen --experimental-skills
```
### Via settings.json
Add to your `~/.qwen/settings.json` or project's `.qwen/settings.json`:
```json
{
"tools": {
"experimental": {
"skills": true
}
}
}
```
- Basic familiarity with Qwen Code ([Quickstart](../quickstart.md))
## What are Agent Skills?

View File

@@ -311,9 +311,9 @@ function setupAcpTest(
}
});
it('returns modes on initialize and allows setting mode and model', async () => {
it('returns modes on initialize and allows setting approval mode', async () => {
const rig = new TestRig();
rig.setup('acp mode and model');
rig.setup('acp approval mode');
const { sendRequest, cleanup, stderr } = setupAcpTest(rig);
@@ -366,14 +366,8 @@ function setupAcpTest(
const newSession = (await sendRequest('session/new', {
cwd: rig.testDir!,
mcpServers: [],
})) as {
sessionId: string;
models: {
availableModels: Array<{ modelId: string }>;
};
};
})) as { sessionId: string };
expect(newSession.sessionId).toBeTruthy();
expect(newSession.models.availableModels.length).toBeGreaterThan(0);
// Test 4: Set approval mode to 'yolo'
const setModeResult = (await sendRequest('session/set_mode', {
@@ -398,15 +392,6 @@ function setupAcpTest(
})) as { modeId: string };
expect(setModeResult3).toBeDefined();
expect(setModeResult3.modeId).toBe('default');
// Test 7: Set model using first available model
const firstModel = newSession.models.availableModels[0];
const setModelResult = (await sendRequest('session/set_model', {
sessionId: newSession.sessionId,
modelId: firstModel.modelId,
})) as { modelId: string };
expect(setModelResult).toBeDefined();
expect(setModelResult.modelId).toBeTruthy();
} catch (e) {
if (stderr.length) {
console.error('Agent stderr:', stderr.join(''));

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@qwen-code/qwen-code",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"workspaces": [
"packages/*"
],
@@ -17310,7 +17310,7 @@
},
"packages/cli": {
"name": "@qwen-code/qwen-code",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"dependencies": {
"@google/genai": "1.30.0",
"@iarna/toml": "^2.2.5",
@@ -17947,7 +17947,7 @@
},
"packages/core": {
"name": "@qwen-code/qwen-code-core",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"hasInstallScript": true,
"dependencies": {
"@anthropic-ai/sdk": "^0.36.1",
@@ -21408,7 +21408,7 @@
},
"packages/test-utils": {
"name": "@qwen-code/qwen-code-test-utils",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"dev": true,
"license": "Apache-2.0",
"devDependencies": {
@@ -21420,7 +21420,7 @@
},
"packages/vscode-ide-companion": {
"name": "qwen-code-vscode-ide-companion",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"license": "LICENSE",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.1",

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"engines": {
"node": ">=20.0.0"
},
@@ -13,7 +13,7 @@
"url": "git+https://github.com/QwenLM/qwen-code.git"
},
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.2-preview.0"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.1"
},
"scripts": {
"start": "cross-env node scripts/start.js",

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"description": "Qwen Code",
"repository": {
"type": "git",
@@ -33,7 +33,7 @@
"dist"
],
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.2-preview.0"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.1"
},
"dependencies": {
"@google/genai": "1.30.0",

View File

@@ -70,13 +70,6 @@ export class AgentSideConnection implements Client {
const validatedParams = schema.setModeRequestSchema.parse(params);
return agent.setMode(validatedParams);
}
case schema.AGENT_METHODS.session_set_model: {
if (!agent.setModel) {
throw RequestError.methodNotFound();
}
const validatedParams = schema.setModelRequestSchema.parse(params);
return agent.setModel(validatedParams);
}
default:
throw RequestError.methodNotFound(method);
}
@@ -415,5 +408,4 @@ export interface Agent {
prompt(params: schema.PromptRequest): Promise<schema.PromptResponse>;
cancel(params: schema.CancelNotification): Promise<void>;
setMode?(params: schema.SetModeRequest): Promise<schema.SetModeResponse>;
setModel?(params: schema.SetModelRequest): Promise<schema.SetModelResponse>;
}

View File

@@ -165,11 +165,30 @@ class GeminiAgent {
this.setupFileSystem(config);
const session = await this.createAndStoreSession(config);
const availableModels = this.buildAvailableModels(config);
const configuredModel = (
config.getModel() ||
this.config.getModel() ||
''
).trim();
const modelId = configuredModel || 'default';
const modelName = configuredModel || modelId;
return {
sessionId: session.getId(),
models: availableModels,
models: {
currentModelId: modelId,
availableModels: [
{
modelId,
name: modelName,
description: null,
_meta: {
contextLimit: tokenLimit(modelId),
},
},
],
_meta: null,
},
};
}
@@ -286,29 +305,15 @@ class GeminiAgent {
async setMode(params: acp.SetModeRequest): Promise<acp.SetModeResponse> {
const session = this.sessions.get(params.sessionId);
if (!session) {
throw acp.RequestError.invalidParams(
`Session not found for id: ${params.sessionId}`,
);
throw new Error(`Session not found: ${params.sessionId}`);
}
return session.setMode(params);
}
async setModel(params: acp.SetModelRequest): Promise<acp.SetModelResponse> {
const session = this.sessions.get(params.sessionId);
if (!session) {
throw acp.RequestError.invalidParams(
`Session not found for id: ${params.sessionId}`,
);
}
return session.setModel(params);
}
private async ensureAuthenticated(config: Config): Promise<void> {
const selectedType = this.settings.merged.security?.auth?.selectedType;
if (!selectedType) {
throw acp.RequestError.authRequired(
'Use Qwen Code CLI to authenticate first.',
);
throw acp.RequestError.authRequired('No Selected Type');
}
try {
@@ -377,43 +382,4 @@ class GeminiAgent {
return session;
}
private buildAvailableModels(
config: Config,
): acp.NewSessionResponse['models'] {
const currentModelId = (
config.getModel() ||
this.config.getModel() ||
''
).trim();
const availableModels = config.getAvailableModels();
const mappedAvailableModels = availableModels.map((model) => ({
modelId: model.id,
name: model.label,
description: model.description ?? null,
_meta: {
contextLimit: tokenLimit(model.id),
},
}));
if (
currentModelId &&
!mappedAvailableModels.some((model) => model.modelId === currentModelId)
) {
mappedAvailableModels.unshift({
modelId: currentModelId,
name: currentModelId,
description: null,
_meta: {
contextLimit: tokenLimit(currentModelId),
},
});
}
return {
currentModelId,
availableModels: mappedAvailableModels,
};
}
}

View File

@@ -15,7 +15,6 @@ export const AGENT_METHODS = {
session_prompt: 'session/prompt',
session_list: 'session/list',
session_set_mode: 'session/set_mode',
session_set_model: 'session/set_model',
};
export const CLIENT_METHODS = {
@@ -267,18 +266,6 @@ export const modelInfoSchema = z.object({
name: z.string(),
});
export const setModelRequestSchema = z.object({
sessionId: z.string(),
modelId: z.string(),
});
export const setModelResponseSchema = z.object({
modelId: z.string(),
});
export type SetModelRequest = z.infer<typeof setModelRequestSchema>;
export type SetModelResponse = z.infer<typeof setModelResponseSchema>;
export const sessionModelStateSchema = z.object({
_meta: acpMetaSchema,
availableModels: z.array(modelInfoSchema),
@@ -605,7 +592,6 @@ export const agentResponseSchema = z.union([
promptResponseSchema,
listSessionsResponseSchema,
setModeResponseSchema,
setModelResponseSchema,
]);
export const requestPermissionRequestSchema = z.object({
@@ -638,7 +624,6 @@ export const agentRequestSchema = z.union([
promptRequestSchema,
listSessionsRequestSchema,
setModeRequestSchema,
setModelRequestSchema,
]);
export const agentNotificationSchema = sessionNotificationSchema;

View File

@@ -1,174 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { Session } from './Session.js';
import type { Config, GeminiChat } from '@qwen-code/qwen-code-core';
import { ApprovalMode } from '@qwen-code/qwen-code-core';
import type * as acp from '../acp.js';
import type { LoadedSettings } from '../../config/settings.js';
import * as nonInteractiveCliCommands from '../../nonInteractiveCliCommands.js';
vi.mock('../../nonInteractiveCliCommands.js', () => ({
getAvailableCommands: vi.fn(),
handleSlashCommand: vi.fn(),
}));
describe('Session', () => {
let mockChat: GeminiChat;
let mockConfig: Config;
let mockClient: acp.Client;
let mockSettings: LoadedSettings;
let session: Session;
let currentModel: string;
let setModelSpy: ReturnType<typeof vi.fn>;
let getAvailableCommandsSpy: ReturnType<typeof vi.fn>;
beforeEach(() => {
currentModel = 'qwen3-code-plus';
setModelSpy = vi.fn().mockImplementation(async (modelId: string) => {
currentModel = modelId;
});
mockChat = {
sendMessageStream: vi.fn(),
addHistory: vi.fn(),
} as unknown as GeminiChat;
mockConfig = {
setApprovalMode: vi.fn(),
setModel: setModelSpy,
getModel: vi.fn().mockImplementation(() => currentModel),
} as unknown as Config;
mockClient = {
sessionUpdate: vi.fn().mockResolvedValue(undefined),
requestPermission: vi.fn().mockResolvedValue({
outcome: { outcome: 'selected', optionId: 'proceed_once' },
}),
sendCustomNotification: vi.fn().mockResolvedValue(undefined),
} as unknown as acp.Client;
mockSettings = {
merged: {},
} as LoadedSettings;
getAvailableCommandsSpy = vi.mocked(nonInteractiveCliCommands)
.getAvailableCommands as unknown as ReturnType<typeof vi.fn>;
getAvailableCommandsSpy.mockResolvedValue([]);
session = new Session(
'test-session-id',
mockChat,
mockConfig,
mockClient,
mockSettings,
);
});
describe('setMode', () => {
it.each([
['plan', ApprovalMode.PLAN],
['default', ApprovalMode.DEFAULT],
['auto-edit', ApprovalMode.AUTO_EDIT],
['yolo', ApprovalMode.YOLO],
] as const)('maps %s mode', async (modeId, expected) => {
const result = await session.setMode({
sessionId: 'test-session-id',
modeId,
});
expect(mockConfig.setApprovalMode).toHaveBeenCalledWith(expected);
expect(result).toEqual({ modeId });
});
});
describe('setModel', () => {
it('sets model via config and returns current model', async () => {
const result = await session.setModel({
sessionId: 'test-session-id',
modelId: ' qwen3-coder-plus ',
});
expect(mockConfig.setModel).toHaveBeenCalledWith('qwen3-coder-plus', {
reason: 'user_request_acp',
context: 'session/set_model',
});
expect(mockConfig.getModel).toHaveBeenCalled();
expect(result).toEqual({ modelId: 'qwen3-coder-plus' });
});
it('rejects empty/whitespace model IDs', async () => {
await expect(
session.setModel({
sessionId: 'test-session-id',
modelId: ' ',
}),
).rejects.toThrow('Invalid params');
expect(mockConfig.setModel).not.toHaveBeenCalled();
});
it('propagates errors from config.setModel', async () => {
const configError = new Error('Invalid model');
setModelSpy.mockRejectedValueOnce(configError);
await expect(
session.setModel({
sessionId: 'test-session-id',
modelId: 'invalid-model',
}),
).rejects.toThrow('Invalid model');
});
});
describe('sendAvailableCommandsUpdate', () => {
it('sends available_commands_update from getAvailableCommands()', async () => {
getAvailableCommandsSpy.mockResolvedValueOnce([
{
name: 'init',
description: 'Initialize project context',
},
]);
await session.sendAvailableCommandsUpdate();
expect(getAvailableCommandsSpy).toHaveBeenCalledWith(
mockConfig,
expect.any(AbortSignal),
);
expect(mockClient.sessionUpdate).toHaveBeenCalledWith({
sessionId: 'test-session-id',
update: {
sessionUpdate: 'available_commands_update',
availableCommands: [
{
name: 'init',
description: 'Initialize project context',
input: null,
},
],
},
});
});
it('swallows errors and does not throw', async () => {
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => undefined);
getAvailableCommandsSpy.mockRejectedValueOnce(
new Error('Command discovery failed'),
);
await expect(
session.sendAvailableCommandsUpdate(),
).resolves.toBeUndefined();
expect(mockClient.sessionUpdate).not.toHaveBeenCalled();
expect(consoleErrorSpy).toHaveBeenCalled();
consoleErrorSpy.mockRestore();
});
});
});

View File

@@ -52,8 +52,6 @@ import type {
AvailableCommandsUpdate,
SetModeRequest,
SetModeResponse,
SetModelRequest,
SetModelResponse,
ApprovalModeValue,
CurrentModeUpdate,
} from '../schema.js';
@@ -350,31 +348,6 @@ export class Session implements SessionContext {
return { modeId: params.modeId };
}
/**
* Sets the model for the current session.
* Validates the model ID and switches the model via Config.
*/
async setModel(params: SetModelRequest): Promise<SetModelResponse> {
const modelId = params.modelId.trim();
if (!modelId) {
throw acp.RequestError.invalidParams('modelId cannot be empty');
}
// Attempt to set the model using config
await this.config.setModel(modelId, {
reason: 'user_request_acp',
context: 'session/set_model',
});
// Get updated model info
const currentModel = this.config.getModel();
return {
modelId: currentModel,
};
}
/**
* Sends a current_mode_update notification to the client.
* Called after the agent switches modes (e.g., from exit_plan_mode tool).

View File

@@ -1196,6 +1196,11 @@ describe('Hierarchical Memory Loading (config.ts) - Placeholder Suite', () => {
],
true,
'tree',
{
respectGitIgnore: false,
respectQwenIgnore: true,
},
undefined, // maxDirs
);
});

View File

@@ -9,6 +9,7 @@ import {
AuthType,
Config,
DEFAULT_QWEN_EMBEDDING_MODEL,
DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
FileDiscoveryService,
getCurrentGeminiMdFilename,
loadServerHierarchicalMemory,
@@ -21,6 +22,7 @@ import {
isToolEnabled,
SessionService,
type ResumedSessionData,
type FileFilteringOptions,
type MCPServerConfig,
type ToolName,
EditTool,
@@ -332,14 +334,7 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
.option('experimental-skills', {
type: 'boolean',
description: 'Enable experimental Skills feature',
default: (() => {
const legacySkills = (
settings as Settings & {
tools?: { experimental?: { skills?: boolean } };
}
).tools?.experimental?.skills;
return settings.experimental?.skills ?? legacySkills ?? false;
})(),
default: false,
})
.option('channel', {
type: 'string',
@@ -648,6 +643,7 @@ export async function loadHierarchicalGeminiMemory(
extensionContextFilePaths: string[] = [],
folderTrust: boolean,
memoryImportFormat: 'flat' | 'tree' = 'tree',
fileFilteringOptions?: FileFilteringOptions,
): Promise<{ memoryContent: string; fileCount: number }> {
// FIX: Use real, canonical paths for a reliable comparison to handle symlinks.
const realCwd = fs.realpathSync(path.resolve(currentWorkingDirectory));
@@ -673,6 +669,8 @@ export async function loadHierarchicalGeminiMemory(
extensionContextFilePaths,
folderTrust,
memoryImportFormat,
fileFilteringOptions,
settings.context?.discoveryMaxDirs,
);
}
@@ -742,6 +740,11 @@ export async function loadCliConfig(
const fileService = new FileDiscoveryService(cwd);
const fileFiltering = {
...DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
...settings.context?.fileFiltering,
};
const includeDirectories = (settings.context?.includeDirectories || [])
.map(resolvePath)
.concat((argv.includeDirectories || []).map(resolvePath));
@@ -758,6 +761,7 @@ export async function loadCliConfig(
extensionContextFilePaths,
trustedFolder,
memoryImportFormat,
fileFiltering,
);
let mcpServers = mergeMcpServers(settings, activeExtensions);

View File

@@ -106,6 +106,7 @@ const MIGRATION_MAP: Record<string, string> = {
mcpServers: 'mcpServers',
mcpServerCommand: 'mcp.serverCommand',
memoryImportFormat: 'context.importFormat',
memoryDiscoveryMaxDirs: 'context.discoveryMaxDirs',
model: 'model.name',
preferredEditor: 'general.preferredEditor',
sandbox: 'tools.sandbox',
@@ -921,21 +922,6 @@ export function migrateDeprecatedSettings(
loadedSettings.setValue(scope, 'extensions', newExtensionsValue);
}
const legacySkills = (
settings as Settings & {
tools?: { experimental?: { skills?: boolean } };
}
).tools?.experimental?.skills;
if (
legacySkills !== undefined &&
settings.experimental?.skills === undefined
) {
console.log(
`Migrating deprecated tools.experimental.skills setting from ${scope} settings...`,
);
loadedSettings.setValue(scope, 'experimental.skills', legacySkills);
}
};
processScope(SettingScope.User);

View File

@@ -722,6 +722,15 @@ const SETTINGS_SCHEMA = {
description: 'The format to use when importing memory.',
showInDialog: false,
},
discoveryMaxDirs: {
type: 'number',
label: 'Memory Discovery Max Dirs',
category: 'Context',
requiresRestart: false,
default: 200,
description: 'Maximum number of directories to search for memory.',
showInDialog: true,
},
includeDirectories: {
type: 'array',
label: 'Include Directories',
@@ -1198,16 +1207,6 @@ const SETTINGS_SCHEMA = {
description: 'Setting to enable experimental features',
showInDialog: false,
properties: {
skills: {
type: 'boolean',
label: 'Skills',
category: 'Experimental',
requiresRestart: true,
default: false,
description:
'Enable experimental Agent Skills feature. When enabled, Qwen Code can use Skills from .qwen/skills/ and ~/.qwen/skills/.',
showInDialog: true,
},
extensionManagement: {
type: 'boolean',
label: 'Extension Management',

View File

@@ -873,11 +873,11 @@ export default {
'Session Stats': '会话统计',
'Model Usage': '模型使用情况',
Reqs: '请求数',
'Input Tokens': '输入 token 数',
'Output Tokens': '输出 token 数',
'Input Tokens': '输入令牌',
'Output Tokens': '输出令牌',
'Savings Highlight:': '节省亮点:',
'of input tokens were served from the cache, reducing costs.':
'从缓存载入 token ,降低了成本',
'的输入令牌来自缓存,降低了成本',
'Tip: For a full token breakdown, run `/stats model`.':
'提示:要查看完整的令牌明细,请运行 `/stats model`',
'Model Stats For Nerds': '模型统计(技术细节)',

View File

@@ -575,6 +575,7 @@ export const AppContainer = (props: AppContainerProps) => {
config.getExtensionContextFilePaths(),
config.isTrustedFolder(),
settings.merged.context?.importFormat || 'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
);
config.setUserMemory(memoryContent);

View File

@@ -54,7 +54,9 @@ describe('directoryCommand', () => {
services: {
config: mockConfig,
settings: {
merged: {},
merged: {
memoryDiscoveryMaxDirs: 1000,
},
},
},
ui: {

View File

@@ -119,6 +119,8 @@ export const directoryCommand: SlashCommand = {
config.getFolderTrust(),
context.services.settings.merged.context?.importFormat ||
'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
context.services.settings.merged.context?.discoveryMaxDirs,
);
config.setUserMemory(memoryContent);
config.setGeminiMdFileCount(fileCount);

View File

@@ -299,7 +299,9 @@ describe('memoryCommand', () => {
services: {
config: mockConfig,
settings: {
merged: {},
merged: {
memoryDiscoveryMaxDirs: 1000,
},
} as LoadedSettings,
},
});

View File

@@ -315,6 +315,8 @@ export const memoryCommand: SlashCommand = {
config.getFolderTrust(),
context.services.settings.merged.context?.importFormat ||
'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
context.services.settings.merged.context?.discoveryMaxDirs,
);
config.setUserMemory(memoryContent);
config.setGeminiMdFileCount(fileCount);

View File

@@ -1331,7 +1331,9 @@ describe('SettingsDialog', () => {
truncateToolOutputThreshold: 50000,
truncateToolOutputLines: 1000,
},
context: {},
context: {
discoveryMaxDirs: 500,
},
model: {
maxSessionTurns: 100,
skipNextSpeakerCheck: false,
@@ -1464,6 +1466,7 @@ describe('SettingsDialog', () => {
disableFuzzySearch: true,
},
loadMemoryFromIncludeDirectories: true,
discoveryMaxDirs: 100,
},
});
const onSelect = vi.fn();

View File

@@ -8,10 +8,7 @@ import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';
import {
updateSettingsFilePreservingFormat,
applyUpdates,
} from './commentJson.js';
import { updateSettingsFilePreservingFormat } from './commentJson.js';
describe('commentJson', () => {
let tempDir: string;
@@ -183,18 +180,3 @@ describe('commentJson', () => {
});
});
});
describe('applyUpdates', () => {
it('should apply updates correctly', () => {
const original = { a: 1, b: { c: 2 } };
const updates = { b: { c: 3 } };
const result = applyUpdates(original, updates);
expect(result).toEqual({ a: 1, b: { c: 3 } });
});
it('should apply updates correctly when empty', () => {
const original = { a: 1, b: { c: 2 } };
const updates = { b: {} };
const result = applyUpdates(original, updates);
expect(result).toEqual({ a: 1, b: {} });
});
});
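With `applyUpdates` no longer exported, callers only interact with `updateSettingsFilePreservingFormat`. A usage sketch, assuming a `(filePath, updates)` signature as suggested by the test and implementation above; the path and values are illustrative:

```ts
import * as os from 'node:os';
import * as path from 'node:path';
import { updateSettingsFilePreservingFormat } from './commentJson.js';

// Deep-merge a nested update into settings.json while preserving the file's
// existing comments and formatting (the assumed purpose of the helper).
const settingsPath = path.join(os.homedir(), '.qwen', 'settings.json');
updateSettingsFilePreservingFormat(settingsPath, {
  context: { discoveryMaxDirs: 300 },
});
```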

View File

@@ -38,7 +38,7 @@ export function updateSettingsFilePreservingFormat(
fs.writeFileSync(filePath, updatedContent, 'utf-8');
}
export function applyUpdates(
function applyUpdates(
current: Record<string, unknown>,
updates: Record<string, unknown>,
): Record<string, unknown> {
@@ -50,7 +50,6 @@ export function applyUpdates(
typeof value === 'object' &&
value !== null &&
!Array.isArray(value) &&
Object.keys(value).length > 0 &&
typeof result[key] === 'object' &&
result[key] !== null &&
!Array.isArray(result[key])

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code-core",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"description": "Qwen Code Core",
"repository": {
"type": "git",

View File

@@ -28,6 +28,7 @@ type RawMessageStreamEvent = Anthropic.RawMessageStreamEvent;
import { getDefaultTokenizer } from '../../utils/request-tokenizer/index.js';
import { safeJsonParse } from '../../utils/safeJsonParse.js';
import { AnthropicContentConverter } from './converter.js';
import { buildRuntimeFetchOptions } from '../../utils/runtimeFetchOptions.js';
type StreamingBlockState = {
type: string;
@@ -54,6 +55,9 @@ export class AnthropicContentGenerator implements ContentGenerator {
) {
const defaultHeaders = this.buildHeaders();
const baseURL = contentGeneratorConfig.baseUrl;
// Configure runtime options to ensure user-configured timeout works as expected
// bodyTimeout is always disabled (0) to let Anthropic SDK timeout control the request
const runtimeOptions = buildRuntimeFetchOptions('anthropic');
this.client = new Anthropic({
apiKey: contentGeneratorConfig.apiKey,
@@ -61,6 +65,7 @@ export class AnthropicContentGenerator implements ContentGenerator {
timeout: contentGeneratorConfig.timeout,
maxRetries: contentGeneratorConfig.maxRetries,
defaultHeaders,
...runtimeOptions,
});
this.converter = new AnthropicContentConverter(

View File

@@ -19,6 +19,8 @@ import type { ContentGeneratorConfig } from '../../contentGenerator.js';
import { AuthType } from '../../contentGenerator.js';
import type { ChatCompletionToolWithCache } from './types.js';
import { DEFAULT_TIMEOUT, DEFAULT_MAX_RETRIES } from '../constants.js';
import { buildRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
import type { OpenAIRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
// Mock OpenAI
vi.mock('openai', () => ({
@@ -32,6 +34,10 @@ vi.mock('openai', () => ({
})),
}));
vi.mock('../../../utils/runtimeFetchOptions.js', () => ({
buildRuntimeFetchOptions: vi.fn(),
}));
describe('DashScopeOpenAICompatibleProvider', () => {
let provider: DashScopeOpenAICompatibleProvider;
let mockContentGeneratorConfig: ContentGeneratorConfig;
@@ -39,6 +45,11 @@ describe('DashScopeOpenAICompatibleProvider', () => {
beforeEach(() => {
vi.clearAllMocks();
const mockedBuildRuntimeFetchOptions =
buildRuntimeFetchOptions as unknown as MockedFunction<
(sdkType: 'openai') => OpenAIRuntimeFetchOptions
>;
mockedBuildRuntimeFetchOptions.mockReturnValue(undefined);
// Mock ContentGeneratorConfig
mockContentGeneratorConfig = {
@@ -185,18 +196,20 @@ describe('DashScopeOpenAICompatibleProvider', () => {
it('should create OpenAI client with DashScope configuration', () => {
const client = provider.buildClient();
expect(OpenAI).toHaveBeenCalledWith({
apiKey: 'test-api-key',
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
timeout: 60000,
maxRetries: 2,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
'X-DashScope-CacheControl': 'enable',
'X-DashScope-UserAgent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
'X-DashScope-AuthType': AuthType.QWEN_OAUTH,
},
});
expect(OpenAI).toHaveBeenCalledWith(
expect.objectContaining({
apiKey: 'test-api-key',
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
timeout: 60000,
maxRetries: 2,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
'X-DashScope-CacheControl': 'enable',
'X-DashScope-UserAgent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
'X-DashScope-AuthType': AuthType.QWEN_OAUTH,
},
}),
);
expect(client).toBeDefined();
});
@@ -207,13 +220,15 @@ describe('DashScopeOpenAICompatibleProvider', () => {
provider.buildClient();
expect(OpenAI).toHaveBeenCalledWith({
apiKey: 'test-api-key',
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
timeout: DEFAULT_TIMEOUT,
maxRetries: DEFAULT_MAX_RETRIES,
defaultHeaders: expect.any(Object),
});
expect(OpenAI).toHaveBeenCalledWith(
expect.objectContaining({
apiKey: 'test-api-key',
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
timeout: DEFAULT_TIMEOUT,
maxRetries: DEFAULT_MAX_RETRIES,
defaultHeaders: expect.any(Object),
}),
);
});
});

View File

@@ -16,6 +16,7 @@ import type {
ChatCompletionContentPartWithCache,
ChatCompletionToolWithCache,
} from './types.js';
import { buildRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
export class DashScopeOpenAICompatibleProvider
implements OpenAICompatibleProvider
@@ -68,12 +69,16 @@ export class DashScopeOpenAICompatibleProvider
maxRetries = DEFAULT_MAX_RETRIES,
} = this.contentGeneratorConfig;
const defaultHeaders = this.buildHeaders();
// Configure fetch options to ensure user-configured timeout works as expected
// bodyTimeout is always disabled (0) to let OpenAI SDK timeout control the request
const fetchOptions = buildRuntimeFetchOptions('openai');
return new OpenAI({
apiKey,
baseURL: baseUrl,
timeout,
maxRetries,
defaultHeaders,
...(fetchOptions ? { fetchOptions } : {}),
});
}
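The body of `buildRuntimeFetchOptions` is not part of this diff. As a rough, hypothetical sketch of the idea the comment describes (disable the body timeout at the fetch layer so the SDK-level `timeout` alone controls the request), it could look something like the following; routing through an undici `Agent` is an assumption, not the confirmed implementation:

```ts
import { Agent } from 'undici';

// Hypothetical sketch: fetch-layer options with undici's bodyTimeout disabled,
// so only the SDK's own `timeout` option cuts off long-running responses.
export function buildRuntimeFetchOptions(sdkType: 'openai' | 'anthropic') {
  // `dispatcher` is undici's extension to RequestInit; Node's fetch honors it.
  const dispatcher = new Agent({ bodyTimeout: 0 });
  if (sdkType === 'openai') {
    // Returned value is passed as the OpenAI client's `fetchOptions`.
    return { dispatcher };
  }
  // For the Anthropic client the result is spread into the constructor options.
  return { fetchOptions: { dispatcher } };
}
```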

View File

@@ -17,6 +17,8 @@ import { DefaultOpenAICompatibleProvider } from './default.js';
import type { Config } from '../../../config/config.js';
import type { ContentGeneratorConfig } from '../../contentGenerator.js';
import { DEFAULT_TIMEOUT, DEFAULT_MAX_RETRIES } from '../constants.js';
import { buildRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
import type { OpenAIRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
// Mock OpenAI
vi.mock('openai', () => ({
@@ -30,6 +32,10 @@ vi.mock('openai', () => ({
})),
}));
vi.mock('../../../utils/runtimeFetchOptions.js', () => ({
buildRuntimeFetchOptions: vi.fn(),
}));
describe('DefaultOpenAICompatibleProvider', () => {
let provider: DefaultOpenAICompatibleProvider;
let mockContentGeneratorConfig: ContentGeneratorConfig;
@@ -37,6 +43,11 @@ describe('DefaultOpenAICompatibleProvider', () => {
beforeEach(() => {
vi.clearAllMocks();
const mockedBuildRuntimeFetchOptions =
buildRuntimeFetchOptions as unknown as MockedFunction<
(sdkType: 'openai') => OpenAIRuntimeFetchOptions
>;
mockedBuildRuntimeFetchOptions.mockReturnValue(undefined);
// Mock ContentGeneratorConfig
mockContentGeneratorConfig = {
@@ -112,15 +123,17 @@ describe('DefaultOpenAICompatibleProvider', () => {
it('should create OpenAI client with correct configuration', () => {
const client = provider.buildClient();
expect(OpenAI).toHaveBeenCalledWith({
apiKey: 'test-api-key',
baseURL: 'https://api.openai.com/v1',
timeout: 60000,
maxRetries: 2,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
},
});
expect(OpenAI).toHaveBeenCalledWith(
expect.objectContaining({
apiKey: 'test-api-key',
baseURL: 'https://api.openai.com/v1',
timeout: 60000,
maxRetries: 2,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
},
}),
);
expect(client).toBeDefined();
});
@@ -131,15 +144,17 @@ describe('DefaultOpenAICompatibleProvider', () => {
provider.buildClient();
expect(OpenAI).toHaveBeenCalledWith({
apiKey: 'test-api-key',
baseURL: 'https://api.openai.com/v1',
timeout: DEFAULT_TIMEOUT,
maxRetries: DEFAULT_MAX_RETRIES,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
},
});
expect(OpenAI).toHaveBeenCalledWith(
expect.objectContaining({
apiKey: 'test-api-key',
baseURL: 'https://api.openai.com/v1',
timeout: DEFAULT_TIMEOUT,
maxRetries: DEFAULT_MAX_RETRIES,
defaultHeaders: {
'User-Agent': `QwenCode/1.0.0 (${process.platform}; ${process.arch})`,
},
}),
);
});
it('should include custom headers from buildHeaders', () => {

View File

@@ -4,6 +4,7 @@ import type { Config } from '../../../config/config.js';
import type { ContentGeneratorConfig } from '../../contentGenerator.js';
import { DEFAULT_TIMEOUT, DEFAULT_MAX_RETRIES } from '../constants.js';
import type { OpenAICompatibleProvider } from './types.js';
import { buildRuntimeFetchOptions } from '../../../utils/runtimeFetchOptions.js';
/**
* Default provider for standard OpenAI-compatible APIs
@@ -43,12 +44,16 @@ export class DefaultOpenAICompatibleProvider
maxRetries = DEFAULT_MAX_RETRIES,
} = this.contentGeneratorConfig;
const defaultHeaders = this.buildHeaders();
// Configure fetch options to ensure user-configured timeout works as expected
// bodyTimeout is always disabled (0) to let OpenAI SDK timeout control the request
const fetchOptions = buildRuntimeFetchOptions('openai');
return new OpenAI({
apiKey,
baseURL: baseUrl,
timeout,
maxRetries,
defaultHeaders,
...(fetchOptions ? { fetchOptions } : {}),
});
}

View File

@@ -112,62 +112,6 @@ You are a helpful assistant with this skill.
expect(config.filePath).toBe(validSkillConfig.filePath);
});
it('should parse markdown with CRLF line endings', () => {
const markdownCrlf = `---\r
name: test-skill\r
description: A test skill\r
---\r
\r
You are a helpful assistant with this skill.\r
`;
const config = manager.parseSkillContent(
markdownCrlf,
validSkillConfig.filePath,
'project',
);
expect(config.name).toBe('test-skill');
expect(config.description).toBe('A test skill');
expect(config.body).toBe('You are a helpful assistant with this skill.');
});
it('should parse markdown with UTF-8 BOM', () => {
const markdownWithBom = `\uFEFF---
name: test-skill
description: A test skill
---
You are a helpful assistant with this skill.
`;
const config = manager.parseSkillContent(
markdownWithBom,
validSkillConfig.filePath,
'project',
);
expect(config.name).toBe('test-skill');
expect(config.description).toBe('A test skill');
});
it('should parse markdown when body is empty and file ends after frontmatter', () => {
const frontmatterOnly = `---
name: test-skill
description: A test skill
---`;
const config = manager.parseSkillContent(
frontmatterOnly,
validSkillConfig.filePath,
'project',
);
expect(config.name).toBe('test-skill');
expect(config.description).toBe('A test skill');
expect(config.body).toBe('');
});
it('should parse content with allowedTools', () => {
const markdownWithTools = `---
name: test-skill

View File

@@ -307,11 +307,9 @@ export class SkillManager {
level: SkillLevel,
): SkillConfig {
try {
const normalizedContent = normalizeSkillFileContent(content);
// Split frontmatter and content
const frontmatterRegex = /^---\n([\s\S]*?)\n---(?:\n|$)([\s\S]*)$/;
const match = normalizedContent.match(frontmatterRegex);
const frontmatterRegex = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/;
const match = content.match(frontmatterRegex);
if (!match) {
throw new Error('Invalid format: missing YAML frontmatter');
@@ -558,13 +556,3 @@ export class SkillManager {
}
}
}
function normalizeSkillFileContent(content: string): string {
// Strip UTF-8 BOM to ensure frontmatter starts at the first character.
let normalized = content.replace(/^\uFEFF/, '');
// Normalize line endings so skills authored on Windows (CRLF) parse correctly.
normalized = normalized.replace(/\r\n/g, '\n').replace(/\r/g, '\n');
return normalized;
}

View File

@@ -0,0 +1,232 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import * as fsPromises from 'node:fs/promises';
import * as path from 'node:path';
import * as os from 'node:os';
import { bfsFileSearch } from './bfsFileSearch.js';
import { FileDiscoveryService } from '../services/fileDiscoveryService.js';
describe('bfsFileSearch', () => {
let testRootDir: string;
async function createEmptyDir(...pathSegments: string[]) {
const fullPath = path.join(testRootDir, ...pathSegments);
await fsPromises.mkdir(fullPath, { recursive: true });
return fullPath;
}
async function createTestFile(content: string, ...pathSegments: string[]) {
const fullPath = path.join(testRootDir, ...pathSegments);
await fsPromises.mkdir(path.dirname(fullPath), { recursive: true });
await fsPromises.writeFile(fullPath, content);
return fullPath;
}
beforeEach(async () => {
testRootDir = await fsPromises.mkdtemp(
path.join(os.tmpdir(), 'bfs-file-search-test-'),
);
});
afterEach(async () => {
await fsPromises.rm(testRootDir, { recursive: true, force: true });
});
it('should find a file in the root directory', async () => {
const targetFilePath = await createTestFile('content', 'target.txt');
const result = await bfsFileSearch(testRootDir, { fileName: 'target.txt' });
expect(result).toEqual([targetFilePath]);
});
it('should find a file in a nested directory', async () => {
const targetFilePath = await createTestFile(
'content',
'a',
'b',
'target.txt',
);
const result = await bfsFileSearch(testRootDir, { fileName: 'target.txt' });
expect(result).toEqual([targetFilePath]);
});
it('should find multiple files with the same name', async () => {
const targetFilePath1 = await createTestFile('content1', 'a', 'target.txt');
const targetFilePath2 = await createTestFile('content2', 'b', 'target.txt');
const result = await bfsFileSearch(testRootDir, { fileName: 'target.txt' });
result.sort();
expect(result).toEqual([targetFilePath1, targetFilePath2].sort());
});
it('should return an empty array if no file is found', async () => {
await createTestFile('content', 'other.txt');
const result = await bfsFileSearch(testRootDir, { fileName: 'target.txt' });
expect(result).toEqual([]);
});
it('should ignore directories specified in ignoreDirs', async () => {
await createTestFile('content', 'ignored', 'target.txt');
const targetFilePath = await createTestFile(
'content',
'not-ignored',
'target.txt',
);
const result = await bfsFileSearch(testRootDir, {
fileName: 'target.txt',
ignoreDirs: ['ignored'],
});
expect(result).toEqual([targetFilePath]);
});
it('should respect the maxDirs limit and not find the file', async () => {
await createTestFile('content', 'a', 'b', 'c', 'target.txt');
const result = await bfsFileSearch(testRootDir, {
fileName: 'target.txt',
maxDirs: 3,
});
expect(result).toEqual([]);
});
it('should respect the maxDirs limit and find the file', async () => {
const targetFilePath = await createTestFile(
'content',
'a',
'b',
'c',
'target.txt',
);
const result = await bfsFileSearch(testRootDir, {
fileName: 'target.txt',
maxDirs: 4,
});
expect(result).toEqual([targetFilePath]);
});
describe('with FileDiscoveryService', () => {
let projectRoot: string;
beforeEach(async () => {
projectRoot = await createEmptyDir('project');
});
it('should ignore gitignored files', async () => {
await createEmptyDir('project', '.git');
await createTestFile('node_modules/', 'project', '.gitignore');
await createTestFile('content', 'project', 'node_modules', 'target.txt');
const targetFilePath = await createTestFile(
'content',
'project',
'not-ignored',
'target.txt',
);
const fileService = new FileDiscoveryService(projectRoot);
const result = await bfsFileSearch(projectRoot, {
fileName: 'target.txt',
fileService,
fileFilteringOptions: {
respectGitIgnore: true,
respectQwenIgnore: true,
},
});
expect(result).toEqual([targetFilePath]);
});
it('should ignore qwenignored files', async () => {
await createTestFile('node_modules/', 'project', '.qwenignore');
await createTestFile('content', 'project', 'node_modules', 'target.txt');
const targetFilePath = await createTestFile(
'content',
'project',
'not-ignored',
'target.txt',
);
const fileService = new FileDiscoveryService(projectRoot);
const result = await bfsFileSearch(projectRoot, {
fileName: 'target.txt',
fileService,
fileFilteringOptions: {
respectGitIgnore: false,
respectQwenIgnore: true,
},
});
expect(result).toEqual([targetFilePath]);
});
it('should not ignore files if respect flags are false', async () => {
await createEmptyDir('project', '.git');
await createTestFile('node_modules/', 'project', '.gitignore');
const target1 = await createTestFile(
'content',
'project',
'node_modules',
'target.txt',
);
const target2 = await createTestFile(
'content',
'project',
'not-ignored',
'target.txt',
);
const fileService = new FileDiscoveryService(projectRoot);
const result = await bfsFileSearch(projectRoot, {
fileName: 'target.txt',
fileService,
fileFilteringOptions: {
respectGitIgnore: false,
respectQwenIgnore: false,
},
});
expect(result.sort()).toEqual([target1, target2].sort());
});
});
it('should find all files in a complex directory structure', async () => {
// Create a complex directory structure to test correctness at scale
// without flaky performance checks.
const numDirs = 50;
const numFilesPerDir = 2;
const numTargetDirs = 10;
const dirCreationPromises: Array<Promise<unknown>> = [];
for (let i = 0; i < numDirs; i++) {
dirCreationPromises.push(createEmptyDir(`dir${i}`));
dirCreationPromises.push(createEmptyDir(`dir${i}`, 'subdir1'));
dirCreationPromises.push(createEmptyDir(`dir${i}`, 'subdir2'));
dirCreationPromises.push(createEmptyDir(`dir${i}`, 'subdir1', 'deep'));
}
await Promise.all(dirCreationPromises);
const fileCreationPromises: Array<Promise<string>> = [];
for (let i = 0; i < numTargetDirs; i++) {
// Add target files in some directories
fileCreationPromises.push(
createTestFile('content', `dir${i}`, 'QWEN.md'),
);
fileCreationPromises.push(
createTestFile('content', `dir${i}`, 'subdir1', 'QWEN.md'),
);
}
const expectedFiles = await Promise.all(fileCreationPromises);
const result = await bfsFileSearch(testRootDir, {
fileName: 'QWEN.md',
// Provide a generous maxDirs limit to ensure it doesn't prematurely stop
// in this large test case. Total dirs created is 200.
maxDirs: 250,
});
// Verify we found the exact files we created
expect(result.length).toBe(numTargetDirs * numFilesPerDir);
expect(result.sort()).toEqual(expectedFiles.sort());
});
});

View File

@@ -0,0 +1,131 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import * as fs from 'node:fs/promises';
import * as path from 'node:path';
import type { FileDiscoveryService } from '../services/fileDiscoveryService.js';
import type { FileFilteringOptions } from '../config/constants.js';
// Simple console logger for now.
// TODO: Integrate with a more robust server-side logger.
const logger = {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
debug: (...args: any[]) => console.debug('[DEBUG] [BfsFileSearch]', ...args),
};
interface BfsFileSearchOptions {
fileName: string;
ignoreDirs?: string[];
maxDirs?: number;
debug?: boolean;
fileService?: FileDiscoveryService;
fileFilteringOptions?: FileFilteringOptions;
}
/**
* Performs a breadth-first search for a specific file within a directory structure.
*
* @param rootDir The directory to start the search from.
* @param options Configuration for the search.
* @returns A promise that resolves to an array of paths where the file was found.
*/
export async function bfsFileSearch(
rootDir: string,
options: BfsFileSearchOptions,
): Promise<string[]> {
const {
fileName,
ignoreDirs = [],
maxDirs = Infinity,
debug = false,
fileService,
} = options;
const foundFiles: string[] = [];
const queue: string[] = [rootDir];
const visited = new Set<string>();
let scannedDirCount = 0;
let queueHead = 0; // Pointer-based queue head to avoid expensive splice operations
// Convert ignoreDirs array to Set for O(1) lookup performance
const ignoreDirsSet = new Set(ignoreDirs);
// Process directories in parallel batches for maximum performance
const PARALLEL_BATCH_SIZE = 15; // Parallel processing batch size for optimal performance
while (queueHead < queue.length && scannedDirCount < maxDirs) {
// Fill batch with unvisited directories up to the desired size
const batchSize = Math.min(PARALLEL_BATCH_SIZE, maxDirs - scannedDirCount);
const currentBatch = [];
while (currentBatch.length < batchSize && queueHead < queue.length) {
const currentDir = queue[queueHead];
queueHead++;
if (!visited.has(currentDir)) {
visited.add(currentDir);
currentBatch.push(currentDir);
}
}
scannedDirCount += currentBatch.length;
if (currentBatch.length === 0) continue;
if (debug) {
logger.debug(
`Scanning [${scannedDirCount}/${maxDirs}]: batch of ${currentBatch.length}`,
);
}
// Read directories in parallel instead of one by one
const readPromises = currentBatch.map(async (currentDir) => {
try {
const entries = await fs.readdir(currentDir, { withFileTypes: true });
return { currentDir, entries };
} catch (error) {
// Warn user that a directory could not be read, as this affects search results.
const message = (error as Error)?.message ?? 'Unknown error';
console.warn(
`[WARN] Skipping unreadable directory: ${currentDir} (${message})`,
);
if (debug) {
logger.debug(`Full error for ${currentDir}:`, error);
}
return { currentDir, entries: [] };
}
});
const results = await Promise.all(readPromises);
for (const { currentDir, entries } of results) {
for (const entry of entries) {
const fullPath = path.join(currentDir, entry.name);
const isDirectory = entry.isDirectory();
const isMatchingFile = entry.isFile() && entry.name === fileName;
if (!isDirectory && !isMatchingFile) {
continue;
}
if (isDirectory && ignoreDirsSet.has(entry.name)) {
continue;
}
if (
fileService?.shouldIgnoreFile(fullPath, {
respectGitIgnore: options.fileFilteringOptions?.respectGitIgnore,
respectQwenIgnore: options.fileFilteringOptions?.respectQwenIgnore,
})
) {
continue;
}
if (isDirectory) {
queue.push(fullPath);
} else {
foundFiles.push(fullPath);
}
}
}
}
return foundFiles;
}

View File

@@ -209,7 +209,7 @@ describe('loadServerHierarchicalMemory', () => {
});
});
it('should load context files from CWD with custom filename (not subdirectories)', async () => {
it('should load context files by downward traversal with custom filename', async () => {
const customFilename = 'LOCAL_CONTEXT.md';
setGeminiMdFilename(customFilename);
@@ -228,10 +228,9 @@ describe('loadServerHierarchicalMemory', () => {
DEFAULT_FOLDER_TRUST,
);
// Only upward traversal is performed, subdirectory files are not loaded
expect(result).toEqual({
memoryContent: `--- Context from: ${customFilename} ---\nCWD custom memory\n--- End of Context from: ${customFilename} ---`,
fileCount: 1,
memoryContent: `--- Context from: ${customFilename} ---\nCWD custom memory\n--- End of Context from: ${customFilename} ---\n\n--- Context from: ${path.join('subdir', customFilename)} ---\nSubdir custom memory\n--- End of Context from: ${path.join('subdir', customFilename)} ---`,
fileCount: 2,
});
});
@@ -260,7 +259,7 @@ describe('loadServerHierarchicalMemory', () => {
});
});
it('should only load context files from CWD, not subdirectories', async () => {
it('should load ORIGINAL_GEMINI_MD_FILENAME files by downward traversal from CWD', async () => {
await createTestFile(
path.join(cwd, 'subdir', DEFAULT_CONTEXT_FILENAME),
'Subdir memory',
@@ -279,14 +278,13 @@ describe('loadServerHierarchicalMemory', () => {
DEFAULT_FOLDER_TRUST,
);
// Subdirectory files are not loaded, only CWD and upward
expect(result).toEqual({
memoryContent: `--- Context from: ${DEFAULT_CONTEXT_FILENAME} ---\nCWD memory\n--- End of Context from: ${DEFAULT_CONTEXT_FILENAME} ---`,
fileCount: 1,
memoryContent: `--- Context from: ${DEFAULT_CONTEXT_FILENAME} ---\nCWD memory\n--- End of Context from: ${DEFAULT_CONTEXT_FILENAME} ---\n\n--- Context from: ${path.join('subdir', DEFAULT_CONTEXT_FILENAME)} ---\nSubdir memory\n--- End of Context from: ${path.join('subdir', DEFAULT_CONTEXT_FILENAME)} ---`,
fileCount: 2,
});
});
it('should load and correctly order global and upward context files', async () => {
it('should load and correctly order global, upward, and downward ORIGINAL_GEMINI_MD_FILENAME files', async () => {
const defaultContextFile = await createTestFile(
path.join(homedir, QWEN_DIR, DEFAULT_CONTEXT_FILENAME),
'default context content',
@@ -303,7 +301,7 @@ describe('loadServerHierarchicalMemory', () => {
path.join(cwd, DEFAULT_CONTEXT_FILENAME),
'CWD memory',
);
await createTestFile(
const subDirGeminiFile = await createTestFile(
path.join(cwd, 'sub', DEFAULT_CONTEXT_FILENAME),
'Subdir memory',
);
@@ -317,10 +315,92 @@ describe('loadServerHierarchicalMemory', () => {
DEFAULT_FOLDER_TRUST,
);
// Subdirectory files are not loaded, only global and upward from CWD
expect(result).toEqual({
memoryContent: `--- Context from: ${path.relative(cwd, defaultContextFile)} ---\ndefault context content\n--- End of Context from: ${path.relative(cwd, defaultContextFile)} ---\n\n--- Context from: ${path.relative(cwd, rootGeminiFile)} ---\nProject parent memory\n--- End of Context from: ${path.relative(cwd, rootGeminiFile)} ---\n\n--- Context from: ${path.relative(cwd, projectRootGeminiFile)} ---\nProject root memory\n--- End of Context from: ${path.relative(cwd, projectRootGeminiFile)} ---\n\n--- Context from: ${path.relative(cwd, cwdGeminiFile)} ---\nCWD memory\n--- End of Context from: ${path.relative(cwd, cwdGeminiFile)} ---`,
fileCount: 4,
memoryContent: `--- Context from: ${path.relative(cwd, defaultContextFile)} ---\ndefault context content\n--- End of Context from: ${path.relative(cwd, defaultContextFile)} ---\n\n--- Context from: ${path.relative(cwd, rootGeminiFile)} ---\nProject parent memory\n--- End of Context from: ${path.relative(cwd, rootGeminiFile)} ---\n\n--- Context from: ${path.relative(cwd, projectRootGeminiFile)} ---\nProject root memory\n--- End of Context from: ${path.relative(cwd, projectRootGeminiFile)} ---\n\n--- Context from: ${path.relative(cwd, cwdGeminiFile)} ---\nCWD memory\n--- End of Context from: ${path.relative(cwd, cwdGeminiFile)} ---\n\n--- Context from: ${path.relative(cwd, subDirGeminiFile)} ---\nSubdir memory\n--- End of Context from: ${path.relative(cwd, subDirGeminiFile)} ---`,
fileCount: 5,
});
});
it('should ignore specified directories during downward scan', async () => {
await createEmptyDir(path.join(projectRoot, '.git'));
await createTestFile(path.join(projectRoot, '.gitignore'), 'node_modules');
await createTestFile(
path.join(cwd, 'node_modules', DEFAULT_CONTEXT_FILENAME),
'Ignored memory',
);
const regularSubDirGeminiFile = await createTestFile(
path.join(cwd, 'my_code', DEFAULT_CONTEXT_FILENAME),
'My code memory',
);
const result = await loadServerHierarchicalMemory(
cwd,
[],
false,
new FileDiscoveryService(projectRoot),
[],
DEFAULT_FOLDER_TRUST,
'tree',
{
respectGitIgnore: true,
respectQwenIgnore: true,
},
200, // maxDirs parameter
);
expect(result).toEqual({
memoryContent: `--- Context from: ${path.relative(cwd, regularSubDirGeminiFile)} ---\nMy code memory\n--- End of Context from: ${path.relative(cwd, regularSubDirGeminiFile)} ---`,
fileCount: 1,
});
});
it('should respect the maxDirs parameter during downward scan', async () => {
const consoleDebugSpy = vi
.spyOn(console, 'debug')
.mockImplementation(() => {});
// Create directories in parallel for better performance
const dirPromises = Array.from({ length: 2 }, (_, i) =>
createEmptyDir(path.join(cwd, `deep_dir_${i}`)),
);
await Promise.all(dirPromises);
// Pass the custom limit directly to the function
await loadServerHierarchicalMemory(
cwd,
[],
true,
new FileDiscoveryService(projectRoot),
[],
DEFAULT_FOLDER_TRUST,
'tree', // importFormat
{
respectGitIgnore: true,
respectQwenIgnore: true,
},
1, // maxDirs
);
expect(consoleDebugSpy).toHaveBeenCalledWith(
expect.stringContaining('[DEBUG] [BfsFileSearch]'),
expect.stringContaining('Scanning [1/1]:'),
);
vi.mocked(console.debug).mockRestore();
const result = await loadServerHierarchicalMemory(
cwd,
[],
false,
new FileDiscoveryService(projectRoot),
[],
DEFAULT_FOLDER_TRUST,
);
expect(result).toEqual({
memoryContent: '',
fileCount: 0,
});
});
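
The expectations above all assert the same framing convention: every discovered context file is wrapped in `--- Context from: <path> ---` / `--- End of Context from: <path> ---` markers, paths are shown relative to the CWD, and blocks are joined with blank lines in global → upward → CWD → downward order. A minimal illustrative sketch of that framing, using a hypothetical `frameContext` helper (the real logic lives in the memory discovery module, not here):

```typescript
import * as path from 'node:path';

// Hypothetical helper, only to illustrate the format the tests assert.
// Paths are displayed relative to the current working directory.
function frameContext(cwd: string, filePath: string, content: string): string {
  const displayPath = path.relative(cwd, filePath);
  return `--- Context from: ${displayPath} ---\n${content}\n--- End of Context from: ${displayPath} ---`;
}

// Blocks are concatenated with a blank line between them,
// e.g. global context first, then the CWD file, then a subdirectory file.
const combined = [
  frameContext('/repo/app', '/home/user/.qwen/QWEN.md', 'default context content'),
  frameContext('/repo/app', '/repo/app/QWEN.md', 'CWD memory'),
  frameContext('/repo/app', '/repo/app/sub/QWEN.md', 'Subdir memory'),
].join('\n\n');
```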

View File

@@ -8,9 +8,12 @@ import * as fs from 'node:fs/promises';
import * as fsSync from 'node:fs';
import * as path from 'node:path';
import { homedir } from 'node:os';
import { bfsFileSearch } from './bfsFileSearch.js';
import { getAllGeminiMdFilenames } from '../tools/memoryTool.js';
import type { FileDiscoveryService } from '../services/fileDiscoveryService.js';
import { processImports } from './memoryImportProcessor.js';
import type { FileFilteringOptions } from '../config/constants.js';
import { DEFAULT_MEMORY_FILE_FILTERING_OPTIONS } from '../config/constants.js';
import { QWEN_DIR } from './paths.js';
// Simple console logger, similar to the one previously in CLI's config.ts
@@ -83,6 +86,8 @@ async function getGeminiMdFilePathsInternal(
fileService: FileDiscoveryService,
extensionContextFilePaths: string[] = [],
folderTrust: boolean,
fileFilteringOptions: FileFilteringOptions,
maxDirs: number,
): Promise<string[]> {
const dirs = new Set<string>([
...includeDirectoriesToReadGemini,
@@ -104,6 +109,8 @@ async function getGeminiMdFilePathsInternal(
fileService,
extensionContextFilePaths,
folderTrust,
fileFilteringOptions,
maxDirs,
),
);
@@ -132,6 +139,8 @@ async function getGeminiMdFilePathsInternalForEachDir(
fileService: FileDiscoveryService,
extensionContextFilePaths: string[] = [],
folderTrust: boolean,
fileFilteringOptions: FileFilteringOptions,
maxDirs: number,
): Promise<string[]> {
const allPaths = new Set<string>();
const geminiMdFilenames = getAllGeminiMdFilenames();
@@ -176,7 +185,7 @@ async function getGeminiMdFilePathsInternalForEachDir(
// Not found, which is okay
}
} else if (dir && folderTrust) {
// FIX: Only perform the workspace search (upward scan from CWD to project root)
// FIX: Only perform the workspace search (upward and downward scans)
// if a valid currentWorkingDirectory is provided and it's not the home directory.
const resolvedCwd = path.resolve(dir);
if (debugMode)
@@ -216,6 +225,23 @@ async function getGeminiMdFilePathsInternalForEachDir(
currentDir = path.dirname(currentDir);
}
upwardPaths.forEach((p) => allPaths.add(p));
const mergedOptions: FileFilteringOptions = {
...DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
...fileFilteringOptions,
};
const downwardPaths = await bfsFileSearch(resolvedCwd, {
fileName: geminiMdFilename,
maxDirs,
debug: debugMode,
fileService,
fileFilteringOptions: mergedOptions,
});
downwardPaths.sort();
for (const dPath of downwardPaths) {
allPaths.add(dPath);
}
}
}
@@ -338,6 +364,8 @@ export async function loadServerHierarchicalMemory(
extensionContextFilePaths: string[] = [],
folderTrust: boolean,
importFormat: 'flat' | 'tree' = 'tree',
fileFilteringOptions?: FileFilteringOptions,
maxDirs: number = 200,
): Promise<LoadServerHierarchicalMemoryResponse> {
if (debugMode)
logger.debug(
@@ -355,6 +383,8 @@ export async function loadServerHierarchicalMemory(
fileService,
extensionContextFilePaths,
folderTrust,
fileFilteringOptions || DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
maxDirs,
);
if (filePaths.length === 0) {
if (debugMode) logger.debug('No QWEN.md files found in hierarchy.');
@@ -370,14 +400,6 @@ export async function loadServerHierarchicalMemory(
contentsWithPaths,
currentWorkingDirectory,
);
// Only count files that match configured memory filenames (e.g., QWEN.md),
// excluding system context files like output-language.md
const memoryFilenames = new Set(getAllGeminiMdFilenames());
const fileCount = contentsWithPaths.filter((item) =>
memoryFilenames.has(path.basename(item.filePath)),
).length;
if (debugMode)
logger.debug(
`Combined instructions length: ${combinedInstructions.length}`,
@@ -388,6 +410,6 @@ export async function loadServerHierarchicalMemory(
);
return {
memoryContent: combinedInstructions,
fileCount, // Only count the context files
fileCount: contentsWithPaths.length,
};
}
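
For reference, a minimal caller sketch matching the parameter order the tests above exercise; the import paths and literal values are assumptions for illustration, not taken from the repository:

```typescript
import { loadServerHierarchicalMemory } from './memoryDiscovery.js'; // assumed path
import { FileDiscoveryService } from '../services/fileDiscoveryService.js'; // assumed path

const projectRoot = '/repo';
const cwd = '/repo/packages/cli';

// The last two arguments are the new ones: file-filtering options for the
// downward BFS scan and the maxDirs cap on how many directories it visits.
const { memoryContent, fileCount } = await loadServerHierarchicalMemory(
  cwd,
  [], // additional include directories
  false, // debugMode
  new FileDiscoveryService(projectRoot),
  [], // extensionContextFilePaths
  true, // folderTrust
  'tree', // importFormat
  { respectGitIgnore: true, respectQwenIgnore: true }, // fileFilteringOptions
  200, // maxDirs (also the default)
);
```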

View File

@@ -0,0 +1,167 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { EnvHttpProxyAgent } from 'undici';
/**
* JavaScript runtime type
*/
export type Runtime = 'node' | 'bun' | 'unknown';
/**
* Detect the current JavaScript runtime
*/
export function detectRuntime(): Runtime {
if (typeof process !== 'undefined' && process.versions?.['bun']) {
return 'bun';
}
if (typeof process !== 'undefined' && process.versions?.node) {
return 'node';
}
return 'unknown';
}
/**
* Runtime fetch options for OpenAI SDK
*/
export type OpenAIRuntimeFetchOptions =
| {
dispatcher?: EnvHttpProxyAgent;
timeout?: false;
}
| undefined;
/**
* Runtime fetch options for Anthropic SDK
*/
export type AnthropicRuntimeFetchOptions = {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
httpAgent?: any;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
fetch?: any;
};
/**
* SDK type identifier
*/
export type SDKType = 'openai' | 'anthropic';
/**
* Build runtime-specific fetch options for OpenAI SDK
*/
export function buildRuntimeFetchOptions(
sdkType: 'openai',
): OpenAIRuntimeFetchOptions;
/**
* Build runtime-specific fetch options for Anthropic SDK
*/
export function buildRuntimeFetchOptions(
sdkType: 'anthropic',
): AnthropicRuntimeFetchOptions;
/**
* Build runtime-specific fetch options based on the detected runtime and SDK type
* This function applies runtime-specific configurations to handle timeout differences
 * across Node.js and Bun, ensuring the user-configured timeout works as expected.
*
* @param sdkType - The SDK type ('openai' or 'anthropic') to determine return type
* @returns Runtime-specific options compatible with the specified SDK
*/
export function buildRuntimeFetchOptions(
sdkType: SDKType,
): OpenAIRuntimeFetchOptions | AnthropicRuntimeFetchOptions {
const runtime = detectRuntime();
// Always disable bodyTimeout (set to 0) to let SDK's timeout parameter
// control the total request time. bodyTimeout only monitors intervals between
// data chunks, not the total request time, so we disable it to ensure the user-configured
// timeout works as expected for both streaming and non-streaming requests.
switch (runtime) {
case 'bun': {
if (sdkType === 'openai') {
// Bun: Disable the built-in 300s timeout to let the OpenAI SDK timeout control the request.
// This ensures the user-configured timeout works as expected without interference.
return {
timeout: false,
};
} else {
// Bun: Use custom fetch to disable built-in 300s timeout
// This allows Anthropic SDK timeout to control the request
// Note: Bun's fetch automatically uses proxy settings from environment variables
// (HTTP_PROXY, HTTPS_PROXY, NO_PROXY), so proxy behavior is preserved
const bunFetch: typeof fetch = async (
input: RequestInfo | URL,
init?: RequestInit,
) => {
const bunFetchOptions: RequestInit = {
...init,
// @ts-expect-error - Bun-specific timeout option
timeout: false,
};
return fetch(input, bunFetchOptions);
};
return {
fetch: bunFetch,
};
}
}
case 'node': {
// Node.js: Use EnvHttpProxyAgent to configure proxy and disable bodyTimeout
// EnvHttpProxyAgent automatically reads proxy settings from environment variables
// (HTTP_PROXY, HTTPS_PROXY, NO_PROXY, etc.) to preserve proxy functionality
// bodyTimeout is always 0 (disabled) to let SDK timeout control the request
try {
const agent = new EnvHttpProxyAgent({
bodyTimeout: 0, // Disable to let SDK timeout control total request time
});
if (sdkType === 'openai') {
return {
dispatcher: agent,
};
} else {
return {
httpAgent: agent,
};
}
} catch {
// If undici is not available, return appropriate default
if (sdkType === 'openai') {
return undefined;
} else {
return {};
}
}
}
default: {
// Unknown runtime: Try to use EnvHttpProxyAgent if available
// EnvHttpProxyAgent automatically reads proxy settings from environment variables
try {
const agent = new EnvHttpProxyAgent({
bodyTimeout: 0, // Disable to let SDK timeout control total request time
});
if (sdkType === 'openai') {
return {
dispatcher: agent,
};
} else {
return {
httpAgent: agent,
};
}
} catch {
if (sdkType === 'openai') {
return undefined;
} else {
return {};
}
}
}
}
}
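
How this helper is consumed is not shown in this diff; the following is a hedged wiring sketch, assuming the OpenAI SDK accepts undici options via a `fetchOptions` constructor option and the Anthropic SDK accepts `httpAgent` / `fetch` constructor options (verify against the SDK versions actually in use):

```typescript
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { buildRuntimeFetchOptions } from './runtimeFetchOptions.js'; // assumed path

// OpenAI: on Node this yields { dispatcher: EnvHttpProxyAgent }, on Bun { timeout: false }.
const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'],
  timeout: 120_000, // SDK timeout stays authoritative; bodyTimeout / Bun's 300s limit are disabled
  fetchOptions: buildRuntimeFetchOptions('openai'), // assumed constructor option
});

// Anthropic: on Node this yields { httpAgent }, on Bun { fetch } with the custom wrapper.
const anthropic = new Anthropic({
  apiKey: process.env['ANTHROPIC_API_KEY'],
  ...buildRuntimeFetchOptions('anthropic'),
});
```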

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code-test-utils",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"private": true,
"main": "src/index.ts",
"license": "Apache-2.0",

View File

@@ -2,7 +2,7 @@
"name": "qwen-code-vscode-ide-companion",
"displayName": "Qwen Code Companion",
"description": "Enable Qwen Code with direct access to your VS Code workspace.",
"version": "0.7.2-preview.0",
"version": "0.7.1",
"publisher": "qwenlm",
"icon": "assets/icon.png",
"repository": {