Merge remote-tracking branch 'upstream/main' into fix/issue-1186-schema-converter

This commit is contained in:
kefuxin
2025-12-16 11:08:51 +08:00
239 changed files with 9372 additions and 7082 deletions

docs/developers/_meta.ts Normal file

@@ -0,0 +1,27 @@
export default {
'Contribute to Qwen Code': {
title: 'Contribute to Qwen Code',
type: 'separator',
},
architecture: 'Architecture',
roadmap: 'Roadmap',
contributing: 'Contributing Guide',
'Qwen Code SDK': {
title: 'Agent SDK',
type: 'separator',
},
'sdk-typescript': 'Typescript SDK',
'Dive Into Qwen Code': {
title: 'Dive Into Qwen Code',
type: 'separator',
},
tools: 'Tools',
extensions: {
display: 'hidden',
},
examples: {
display: 'hidden',
},
};


@@ -0,0 +1,95 @@
# Qwen Code Architecture Overview
This document provides a high-level overview of Qwen Code's architecture.
## Core Components
Qwen Code is primarily composed of two main packages, along with a suite of tools that can be used by the system in the course of handling command-line input:
### 1. CLI Package (`packages/cli`)
**Purpose:** This contains the user-facing portion of Qwen Code, such as handling the initial user input, presenting the final output, and managing the overall user experience.
**Key Functions:**
- **Input Processing:** Handles user input through various methods including direct text entry, slash commands (e.g., `/help`, `/clear`, `/model`), at commands (`@file` for including file content), and exclamation mark commands (`!command` for shell execution); examples of each form appear after this list.
- **History Management:** Maintains conversation history and enables features like session resumption.
- **Display Rendering:** Formats and presents responses to the user in the terminal with syntax highlighting and proper formatting.
- **Theme and UI Customization:** Supports customizable themes and UI elements for a personalized experience.
- **Configuration Settings:** Manages various configuration options through JSON settings files, environment variables, and command-line arguments.
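For illustration, the three special input forms look like this at the prompt (the file path is hypothetical):
```
/help                       # slash command: run a built-in CLI action
@src/index.ts explain this  # at command: include a file's content in the prompt
!git status                 # exclamation mark command: execute a shell command
```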
### 2. Core Package (`packages/core`)
**Purpose:** This acts as the backend for Qwen Code. It receives requests sent from `packages/cli`, orchestrates interactions with the configured model API, and manages the execution of available tools.
**Key Functions:**
- **API Client:** Communicates with the Qwen model API to send prompts and receive responses.
- **Prompt Construction:** Builds appropriate prompts for the model, incorporating conversation history and available tool definitions.
- **Tool Registration and Execution:** Manages the registration of available tools and executes them based on model requests.
- **State Management:** Maintains conversation and session state information.
- **Server-side Configuration:** Handles server-side configuration and settings.
### 3. Tools (`packages/core/src/tools/`)
**Purpose:** These are individual modules that extend the capabilities of the Qwen model, allowing it to interact with the local environment (e.g., file system, shell commands, web fetching).
**Interaction:** `packages/core` invokes these tools based on requests from the Qwen model.
**Common Tools Include:**
- **File Operations:** Reading, writing, and editing files
- **Shell Commands:** Executing system commands with user approval for potentially dangerous operations
- **Search Tools:** Finding files and searching content within the project
- **Web Tools:** Fetching content from the web
- **MCP Integration:** Connecting to Model Context Protocol servers for extended capabilities
## Interaction Flow
A typical interaction with Qwen Code follows this flow:
1. **User Input:** The user types a prompt or command into the terminal, which is managed by `packages/cli`.
2. **Request to Core:** `packages/cli` sends the user's input to `packages/core`.
3. **Request Processing:** The core package:
- Constructs an appropriate prompt for the configured model API, possibly including conversation history and available tool definitions.
- Sends the prompt to the model API.
4. **Model API Response:** The model API processes the prompt and returns a response. This response might be a direct answer or a request to use one of the available tools.
5. **Tool Execution (if applicable):**
- When the model API requests a tool, the core package prepares to execute it.
- If the requested tool can modify the file system or execute shell commands, the user is first given details of the tool and its arguments, and the user must approve the execution.
- Read-only operations, such as reading files, might not require explicit user confirmation to proceed.
- Once confirmed, or if confirmation is not required, the core package executes the relevant action within the relevant tool, and the result is sent back to the model API by the core package.
- The model API processes the tool result and generates a final response.
6. **Response to CLI:** The core package sends the final response back to the CLI package.
7. **Display to User:** The CLI package formats and displays the response to the user in the terminal.
## Configuration Options
Qwen Code offers multiple ways to configure its behavior:
### Configuration Layers (in order of precedence)
1. Command-line arguments
2. Environment variables
3. Project settings file (`.qwen/settings.json`)
4. User settings file (`~/.qwen/settings.json`)
5. System settings files
6. Default values
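For illustration, when the same option is set at several layers, the higher-precedence source wins; the setting and flag names below are assumptions for demonstration only:
```bash
# Project settings file (hypothetical contents)
cat .qwen/settings.json
# { "model": "example-model-a" }

# A command-line argument overrides the settings file (hypothetical flag)
qwen --model example-model-b -p "hello"
```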
### Key Configuration Categories
- **General Settings:** vim mode, preferred editor, auto-update preferences
- **UI Settings:** Theme customization, banner visibility, footer display
- **Model Settings:** Model selection, session turn limits, compression settings
- **Context Settings:** Context file names, directory inclusion, file filtering
- **Tool Settings:** Approval modes, sandboxing, tool restrictions
- **Privacy Settings:** Usage statistics collection
- **Advanced Settings:** Debug options, custom bug reporting commands
## Key Design Principles
- **Modularity:** Separating the CLI (frontend) from the Core (backend) allows for independent development and potential future extensions (e.g., different frontends for the same backend).
- **Extensibility:** The tool system is designed to be extensible, allowing new capabilities to be added through custom tools or MCP server integration.
- **User Experience:** The CLI focuses on providing a rich and interactive terminal experience with features like syntax highlighting, customizable themes, and intuitive command structures.
- **Security:** Implements approval mechanisms for potentially dangerous operations and sandboxing options to protect the user's system.
- **Flexibility:** Supports multiple configuration methods and can adapt to different workflows and environments.


@@ -0,0 +1,303 @@
# How to Contribute
We would love to accept your patches and contributions to this project.
## Contribution Process
### Code Reviews
All submissions, including submissions by project members, require review. We
use [GitHub pull requests](https://docs.github.com/articles/about-pull-requests)
for this purpose.
### Pull Request Guidelines
To help us review and merge your PRs quickly, please follow these guidelines. PRs that do not meet these standards may be closed.
#### 1. Link to an Existing Issue
All PRs should be linked to an existing issue in our tracker. This ensures that every change has been discussed and is aligned with the project's goals before any code is written.
- **For bug fixes:** The PR should be linked to the bug report issue.
- **For features:** The PR should be linked to the feature request or proposal issue that has been approved by a maintainer.
If an issue for your change doesn't exist, please **open one first** and wait for feedback before you start coding.
#### 2. Keep It Small and Focused
We favor small, atomic PRs that address a single issue or add a single, self-contained feature.
- **Do:** Create a PR that fixes one specific bug or adds one specific feature.
- **Don't:** Bundle multiple unrelated changes (e.g., a bug fix, a new feature, and a refactor) into a single PR.
Large changes should be broken down into a series of smaller, logical PRs that can be reviewed and merged independently.
#### 3. Use Draft PRs for Work in Progress
If you'd like to get early feedback on your work, please use GitHub's **Draft Pull Request** feature. This signals to the maintainers that the PR is not yet ready for a formal review but is open for discussion and initial feedback.
#### 4. Ensure All Checks Pass
Before submitting your PR, ensure that all automated checks are passing by running `npm run preflight`. This command runs all tests, linting, and other style checks.
#### 5. Update Documentation
If your PR introduces a user-facing change (e.g., a new command, a modified flag, or a change in behavior), you must also update the relevant documentation in the `/docs` directory.
#### 6. Write Clear Commit Messages and a Good PR Description
Your PR should have a clear, descriptive title and a detailed description of the changes. Follow the [Conventional Commits](https://www.conventionalcommits.org/) standard for your commit messages.
- **Good PR Title:** `feat(cli): Add --json flag to 'config get' command`
- **Bad PR Title:** `Made some changes`
In the PR description, explain the "why" behind your changes and link to the relevant issue (e.g., `Fixes #123`).
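For example, a commit message following that standard might look like this (the issue number is illustrative):
```
feat(cli): Add --json flag to 'config get' command

Allows scripts to consume the command's output as JSON.

Fixes #123
```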
## Development Setup and Workflow
This section guides contributors on how to build, modify, and understand the development setup of this project.
### Setting Up the Development Environment
**Prerequisites:**
1. **Node.js**:
- **Development:** Please use Node.js `~20.19.0`. This specific version is required due to an upstream development dependency issue. You can use a tool like [nvm](https://github.com/nvm-sh/nvm) to manage Node.js versions.
- **Production:** For running the CLI in a production environment, any version of Node.js `>=20` is acceptable.
2. **Git**
### Build Process
To clone the repository:
```bash
git clone https://github.com/QwenLM/qwen-code.git # Or your fork's URL
cd qwen-code
```
To install dependencies defined in `package.json` as well as root dependencies:
```bash
npm install
```
To build the entire project (all packages):
```bash
npm run build
```
This command typically compiles TypeScript to JavaScript, bundles assets, and prepares the packages for execution. Refer to `scripts/build.js` and `package.json` scripts for more details on what happens during the build.
### Enabling Sandboxing
[Sandboxing](#sandboxing) is highly recommended and requires, at a minimum, setting `QWEN_SANDBOX=true` in your `~/.env` and ensuring a sandboxing provider (e.g. `macOS Seatbelt`, `docker`, or `podman`) is available. See [Sandboxing](#sandboxing) for details.
To build both the `qwen-code` CLI utility and the sandbox container, run `build:all` from the root directory:
```bash
npm run build:all
```
To skip building the sandbox container, you can use `npm run build` instead.
### Running
To start the Qwen Code application from the source code (after building), run the following command from the root directory:
```bash
npm start
```
If you'd like to run the source build outside of the qwen-code folder, you can use `npm link path/to/qwen-code/packages/cli` (see: [docs](https://docs.npmjs.com/cli/v9/commands/npm-link)) and then run it with the `qwen-code` command.
### Running Tests
This project contains two types of tests: unit tests and integration tests.
#### Unit Tests
To execute the unit test suite for the project:
```bash
npm run test
```
This will run tests located in the `packages/core` and `packages/cli` directories. Ensure tests pass before submitting any changes. For a more comprehensive check, it is recommended to run `npm run preflight`.
#### Integration Tests
The integration tests are designed to validate the end-to-end functionality of Qwen Code. They are not run as part of the default `npm run test` command.
To run the integration tests, use the following command:
```bash
npm run test:e2e
```
For more detailed information on the integration testing framework, please see the [Integration Tests documentation](./docs/integration-tests.md).
### Linting and Preflight Checks
To ensure code quality and formatting consistency, run the preflight check:
```bash
npm run preflight
```
This command will run ESLint, Prettier, all tests, and other checks as defined in the project's `package.json`.
_Pro tip:_ after cloning, create a git pre-commit hook so your commits always pass the preflight checks:
```bash
cat > .git/hooks/pre-commit << 'EOF'
#!/bin/sh
# Run the preflight checks and abort the commit on failure
if ! npm run preflight; then
  echo "npm run preflight failed. Commit aborted."
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
```
#### Formatting
To separately format the code in this project, run the following command from the root directory:
```bash
npm run format
```
This command uses Prettier to format the code according to the project's style guidelines.
#### Linting
To separately lint the code in this project, run the following command from the root directory:
```bash
npm run lint
```
### Coding Conventions
- Please adhere to the coding style, patterns, and conventions used throughout the existing codebase.
- **Imports:** Pay special attention to import paths. The project uses ESLint to enforce restrictions on relative imports between packages.
### Project Structure
- `packages/`: Contains the individual sub-packages of the project.
- `cli/`: The command-line interface.
- `core/`: The core backend logic for Qwen Code.
- `docs/`: Contains all project documentation.
- `scripts/`: Utility scripts for building, testing, and development tasks.
For more detailed architecture, see `docs/architecture.md`.
## Documentation Development
This section describes how to develop and preview the documentation locally.
### Prerequisites
1. Ensure you have Node.js (version 18+) installed
2. Have npm or yarn available
### Setup Documentation Site Locally
To work on the documentation and preview changes locally:
1. Navigate to the `docs-site` directory:
```bash
cd docs-site
```
2. Install dependencies:
```bash
npm install
```
3. Link the documentation content from the main `docs` directory:
```bash
npm run link
```
This creates a symbolic link from `../docs` to `content` in the docs-site project, allowing the documentation content to be served by the Next.js site.
4. Start the development server:
```bash
npm run dev
```
5. Open [http://localhost:3000](http://localhost:3000) in your browser to see the documentation site with live updates as you make changes.
Any changes made to the documentation files in the main `docs` directory will be reflected immediately in the documentation site.
## Debugging
### VS Code
The quickest way is to press `F5`, which launches the CLI for interactive debugging in VS Code. To attach a debugger manually instead:
1. Start the CLI in debug mode from the root directory:
```bash
npm run debug
```
This command runs `node --inspect-brk dist/index.js` within the `packages/cli` directory, pausing execution until a debugger attaches. You can then open `chrome://inspect` in your Chrome browser to connect to the debugger.
2. In VS Code, use the "Attach" launch configuration (found in `.vscode/launch.json`).
Alternatively, you can use the "Launch Program" configuration in VS Code if you prefer to launch the currently open file directly, but `F5` is generally recommended.
To hit a breakpoint inside the sandbox container run:
```bash
DEBUG=1 qwen-code
```
**Note:** If you have `DEBUG=true` in a project's `.env` file, it won't affect qwen-code due to automatic exclusion. Use `.qwen-code/.env` files for qwen-code specific debug settings.
### React DevTools
To debug the CLI's React-based UI, you can use React DevTools. Ink, the library used for the CLI's interface, is compatible with React DevTools version 4.x.
1. **Start the Qwen Code application in development mode:**
```bash
DEV=true npm start
```
2. **Install and run React DevTools version 4.28.5 (or the latest compatible 4.x version):**
You can either install it globally:
```bash
npm install -g react-devtools@4.28.5
react-devtools
```
Or run it directly using npx:
```bash
npx react-devtools@4.28.5
```
Your running CLI application should then connect to React DevTools.
## Sandboxing
> TBD
## Manual Publish
We publish an artifact for each commit to our internal registry. If you need to manually cut a local build, run the following commands:
```bash
npm run clean
npm install
npm run auth
npm run prerelease:dev
npm publish --workspaces
```


@@ -0,0 +1,9 @@
export default {
npm: 'NPM',
telemetry: 'Telemetry',
'integration-tests': 'Integration Tests',
'issue-and-pr-automation': 'Issue and PR Automation',
deployment: {
display: 'hidden',
},
};


@@ -0,0 +1,117 @@
# Qwen Code Execution and Deployment
This document describes how to run Qwen Code and explains the deployment architecture that Qwen Code uses.
## Running Qwen Code
There are several ways to run Qwen Code. The option you choose depends on how you intend to use it.
---
### 1. Standard installation (Recommended for typical users)
This is the recommended way for end-users to install Qwen Code. It involves downloading the Qwen Code package from the NPM registry.
- **Global install:**
```bash
npm install -g @qwen-code/qwen-code
```
Then, run the CLI from anywhere:
```bash
qwen
```
- **NPX execution:**
```bash
# Execute the latest version from NPM without a global install
npx @qwen-code/qwen-code
```
---
### 2. Running in a sandbox (Docker/Podman)
For security and isolation, Qwen Code can be run inside a container. This is the default way that the CLI executes tools that might have side effects.
- **Directly from the Registry:**
You can run the published sandbox image directly. This is useful for environments where you only have Docker and want to run the CLI.
```bash
# Run the published sandbox image
docker run --rm -it ghcr.io/qwenlm/qwen-code:0.0.11
```
- **Using the `--sandbox` flag:**
If you have Qwen Code installed locally (using the standard installation described above), you can instruct it to run inside the sandbox container.
```bash
qwen --sandbox -y -p "your prompt here"
```
---
### 3. Running from source (Recommended for Qwen Code contributors)
Contributors to the project will want to run the CLI directly from the source code.
- **Development Mode:**
This method provides hot-reloading and is useful for active development.
```bash
# From the root of the repository
npm run start
```
- **Production-like mode (Linked package):**
This method simulates a global installation by linking your local package. It's useful for testing a local build in a production workflow.
```bash
# Link the local cli package to your global node_modules
npm link packages/cli
# Now you can run your local version using the `qwen` command
qwen
```
---
### 4. Running the latest Qwen Code commit from GitHub
You can run the most recently committed version of Qwen Code directly from the GitHub repository. This is useful for testing features still in development.
```bash
# Execute the CLI directly from the main branch on GitHub
npx https://github.com/QwenLM/qwen-code
```
## Deployment architecture
The execution methods described above are made possible by the following architectural components and processes:
**NPM packages**
The Qwen Code project is a monorepo that publishes its core packages to the NPM registry:
- `@qwen-code/qwen-code-core`: The backend, handling logic and tool execution.
- `@qwen-code/qwen-code`: The user-facing frontend.
These packages are used when performing the standard installation and when running Qwen Code from the source.
**Build and packaging processes**
There are two distinct build processes used, depending on the distribution channel:
- **NPM publication:** For publishing to the NPM registry, the TypeScript source code in `@qwen-code/qwen-code-core` and `@qwen-code/qwen-code` is transpiled into standard JavaScript using the TypeScript Compiler (`tsc`). The resulting `dist/` directory is what gets published in the NPM package. This is a standard approach for TypeScript libraries.
- **GitHub `npx` execution:** When running the latest version of Qwen Code directly from GitHub, a different process is triggered by the `prepare` script in `package.json`. This script uses `esbuild` to bundle the entire application and its dependencies into a single, self-contained JavaScript file. This bundle is created on-the-fly on the user's machine and is not checked into the repository.
**Docker sandbox image**
The Docker-based execution method is supported by the `qwen-code-sandbox` container image. This image is published to a container registry and contains a pre-installed, global version of Qwen Code.
## Release process
The release process is automated through GitHub Actions. The release workflow performs the following actions:
1. Build the NPM packages using `tsc`.
2. Publish the NPM packages to the artifact registry.
3. Create GitHub releases with bundled assets.


@@ -0,0 +1,137 @@
# Integration Tests
This document provides information about the integration testing framework used in this project.
## Overview
The integration tests are designed to validate the end-to-end functionality of Qwen Code. They execute the built binary in a controlled environment and verify that it behaves as expected when interacting with the file system.
These tests are located in the `integration-tests` directory and are run using a custom test runner.
## Running the tests
The integration tests are not run as part of the default `npm run test` command. They must be run explicitly using the `npm run test:integration:all` script.
The integration tests can also be run using the following shortcut:
```bash
npm run test:e2e
```
## Running a specific set of tests
To run a subset of test files, you can use `npm run <integration test command> <file_name1> ...`, where `<integration test command>` is either `test:e2e` or `test:integration*` and `<file_name>` is any of the `.test.js` files in the `integration-tests/` directory. For example, the following command runs `list_directory.test.js` and `write_file.test.js`:
```bash
npm run test:e2e list_directory write_file
```
### Running a single test by name
To run a single test by its name, use the `--test-name-pattern` flag:
```bash
npm run test:e2e -- --test-name-pattern "reads a file"
```
### Running all tests
To run the entire suite of integration tests, use the following command:
```bash
npm run test:integration:all
```
### Sandbox matrix
The `all` command will run tests for `no sandboxing`, `docker` and `podman`.
Each individual type can be run using the following commands:
```bash
npm run test:integration:sandbox:none
```
```bash
npm run test:integration:sandbox:docker
```
```bash
npm run test:integration:sandbox:podman
```
## Diagnostics
The integration test runner provides several options for diagnostics to help track down test failures.
### Keeping test output
You can preserve the temporary files created during a test run for inspection. This is useful for debugging issues with file system operations.
To keep the test output set the `KEEP_OUTPUT` environment variable to `true`.
```bash
KEEP_OUTPUT=true npm run test:integration:sandbox:none
```
When output is kept, the test runner will print the path to the unique directory for the test run.
### Verbose output
For more detailed debugging, set the `VERBOSE` environment variable to `true`.
```bash
VERBOSE=true npm run test:integration:sandbox:none
```
When using `VERBOSE=true` and `KEEP_OUTPUT=true` in the same command, the output is streamed to the console and also saved to a log file within the test's temporary directory.
The verbose output is formatted to clearly identify the source of the logs:
```
--- TEST: <log dir>:<test-name> ---
... output from the qwen command ...
--- END TEST: <log dir>:<test-name> ---
```
## Linting and formatting
To ensure code quality and consistency, the integration test files are linted as part of the main build process. You can also manually run the linter and auto-fixer.
### Running the linter
To check for linting errors, run the following command:
```bash
npm run lint
```
You can use the `lint:fix` variant of the command to automatically fix any fixable linting errors:
```bash
npm run lint:fix
```
## Directory structure
The integration tests create a unique directory for each test run inside the `.integration-tests` directory. Within this directory, a subdirectory is created for each test file, and within that, a subdirectory is created for each individual test case.
This structure makes it easy to locate the artifacts for a specific test run, file, or case.
```
.integration-tests/
└── <run-id>/
└── <test-file-name>.test.js/
└── <test-case-name>/
├── output.log
└── ...other test artifacts...
```
## Continuous integration
To ensure the integration tests are always run, a GitHub Actions workflow is defined in `.github/workflows/e2e.yml`. This workflow automatically runs the integration tests for pull requests against the `main` branch, or when a pull request is added to a merge queue.
The workflow runs the tests in different sandboxing environments to ensure Qwen Code is tested across each of them:
- `sandbox:none`: Runs the tests without any sandboxing.
- `sandbox:docker`: Runs the tests in a Docker container.
- `sandbox:podman`: Runs the tests in a Podman container.


@@ -0,0 +1,84 @@
# Automation and Triage Processes
This document provides a detailed overview of the automated processes we use to manage and triage issues and pull requests. Our goal is to provide prompt feedback and ensure that contributions are reviewed and integrated efficiently. Understanding this automation will help you as a contributor know what to expect and how to best interact with our repository bots.
## Guiding Principle: Issues and Pull Requests
First and foremost, almost every Pull Request (PR) should be linked to a corresponding Issue. The issue describes the "what" and the "why" (the bug or feature), while the PR is the "how" (the implementation). This separation helps us track work, prioritize features, and maintain clear historical context. Our automation is built around this principle.
---
## Detailed Automation Workflows
Here is a breakdown of the specific automation workflows that run in our repository.
### 1. When you open an Issue: `Automated Issue Triage`
This is the first bot you will interact with when you create an issue. Its job is to perform an initial analysis and apply the correct labels.
- **Workflow File**: `.github/workflows/qwen-automated-issue-triage.yml`
- **When it runs**: Immediately after an issue is created or reopened.
- **What it does**:
- It uses a Qwen model to analyze the issue's title and body against a detailed set of guidelines.
- **Applies one `area/*` label**: Categorizes the issue into a functional area of the project (e.g., `area/ux`, `area/models`, `area/platform`).
- **Applies one `kind/*` label**: Identifies the type of issue (e.g., `kind/bug`, `kind/enhancement`, `kind/question`).
- **Applies one `priority/*` label**: Assigns a priority from P0 (critical) to P3 (low) based on the described impact.
- **May apply `status/need-information`**: If the issue lacks critical details (like logs or reproduction steps), it will be flagged for more information.
- **May apply `status/need-retesting`**: If the issue references a CLI version that is more than six versions old, it will be flagged for retesting on a current version.
- **What you should do**:
- Fill out the issue template as completely as possible. The more detail you provide, the more accurate the triage will be.
- If the `status/need-information` label is added, please provide the requested details in a comment.
### 2. When you open a Pull Request: `Continuous Integration (CI)`
This workflow ensures that all changes meet our quality standards before they can be merged.
- **Workflow File**: `.github/workflows/ci.yml`
- **When it runs**: On every push to a pull request.
- **What it does**:
- **Lint**: Checks that your code adheres to our project's formatting and style rules.
- **Test**: Runs our full suite of automated tests across macOS, Windows, and Linux, and on multiple Node.js versions. This is the most time-consuming part of the CI process.
- **Post Coverage Comment**: After all tests have successfully passed, a bot will post a comment on your PR. This comment provides a summary of how well your changes are covered by tests.
- **What you should do**:
- Ensure all CI checks pass. A green checkmark ✅ will appear next to your commit when everything is successful.
- If a check fails (a red "X" ❌), click the "Details" link next to the failed check to view the logs, identify the problem, and push a fix.
### 3. Ongoing Triage for Pull Requests: `PR Auditing and Label Sync`
This workflow runs periodically to ensure all open PRs are correctly linked to issues and have consistent labels.
- **Workflow File**: `.github/workflows/qwen-scheduled-pr-triage.yml`
- **When it runs**: Every 15 minutes on all open pull requests.
- **What it does**:
- **Checks for a linked issue**: The bot scans your PR description for a keyword that links it to an issue (e.g., `Fixes #123`, `Closes #456`).
- **Adds `status/need-issue`**: If no linked issue is found, the bot will add the `status/need-issue` label to your PR. This is a clear signal that an issue needs to be created and linked.
- **Synchronizes labels**: If an issue _is_ linked, the bot ensures the PR's labels perfectly match the issue's labels. It will add any missing labels and remove any that don't belong, and it will remove the `status/need-issue` label if it was present.
- **What you should do**:
- **Always link your PR to an issue.** This is the most important step. Add a line like `Resolves #<issue-number>` to your PR description.
- This will ensure your PR is correctly categorized and moves through the review process smoothly.
### 4. Ongoing Triage for Issues: `Scheduled Issue Triage`
This is a fallback workflow to ensure that no issue gets missed by the triage process.
- **Workflow File**: `.github/workflows/qwen-scheduled-issue-triage.yml`
- **When it runs**: Every hour on all open issues.
- **What it does**:
- It actively seeks out issues that either have no labels at all or still have the `status/need-triage` label.
- It then triggers the same powerful Qwen-model-based analysis as the initial triage bot to apply the correct labels.
- **What you should do**:
- You typically don't need to do anything. This workflow is a safety net to ensure every issue is eventually categorized, even if the initial triage fails.
### 5. Release Automation
This workflow handles the process of packaging and publishing new versions of Qwen Code.
- **Workflow File**: `.github/workflows/release.yml`
- **When it runs**: On a daily schedule for "nightly" releases, and manually for official patch/minor releases.
- **What it does**:
- Automatically builds the project, bumps the version numbers, and publishes the packages to npm.
- Creates a corresponding release on GitHub with generated release notes.
- **What you should do**:
- As a contributor, you don't need to do anything for this process. You can be confident that once your PR is merged into the `main` branch, your changes will be included in the very next nightly release.
We hope this detailed overview is helpful. If you have any questions about our automation or processes, please don't hesitate to ask!


@@ -0,0 +1,258 @@
# Package Overview
This monorepo contains two main packages: `@qwen-code/qwen-code` and `@qwen-code/qwen-code-core`.
## `@qwen-code/qwen-code`
This is the main package for Qwen Code. It is responsible for the user interface, command parsing, and all other user-facing functionality.
When this package is published, it is bundled into a single executable file. This bundle includes all of the package's dependencies, including `@qwen-code/qwen-code-core`. This means that whether a user installs the package with `npm install -g @qwen-code/qwen-code` or runs it directly with `npx @qwen-code/qwen-code`, they are using this single, self-contained executable.
## `@qwen-code/qwen-code-core`
This package contains the core logic for the CLI. It is responsible for making API requests to configured providers, handling authentication, and managing the local cache.
This package is not bundled. When it is published, it is published as a standard Node.js package with its own dependencies. This allows it to be used as a standalone package in other projects, if needed. All transpiled JavaScript code in the `dist` folder is included in the package.
# Release Process
This project follows a structured release process to ensure that all packages are versioned and published correctly. The process is designed to be as automated as possible.
## How To Release
Releases are managed through the [release.yml](https://github.com/QwenLM/qwen-code/actions/workflows/release.yml) GitHub Actions workflow. To perform a manual release for a patch or hotfix:
1. Navigate to the **Actions** tab of the repository.
2. Select the **Release** workflow from the list.
3. Click the **Run workflow** dropdown button.
4. Fill in the required inputs:
- **Version**: The exact version to release (e.g., `v0.2.1`).
- **Ref**: The branch or commit SHA to release from (defaults to `main`).
- **Dry Run**: Leave as `true` to test the workflow without publishing, or set to `false` to perform a live release.
5. Click **Run workflow**.
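If you prefer a terminal, the same dispatch can be performed with the GitHub CLI; the input names below are assumptions inferred from the form fields described above:
```bash
# Dry-run dispatch of the Release workflow via the GitHub CLI
gh workflow run release.yml \
  -f version=v0.2.1 \
  -f ref=main \
  -f dry_run=true
```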
## Release Types
The project supports multiple types of releases:
### Stable Releases
Regular stable releases for production use.
### Preview Releases
Weekly preview releases every Tuesday at 23:59 UTC for early access to upcoming features.
### Nightly Releases
Daily nightly releases at midnight UTC for bleeding-edge development testing.
## Automated Release Schedule
- **Nightly**: Every day at midnight UTC
- **Preview**: Every Tuesday at 23:59 UTC
- **Stable**: Manual releases triggered by maintainers
### How to Use Different Release Types
To install the latest version of each type:
```bash
# Stable (default)
npm install -g @qwen-code/qwen-code
# Preview
npm install -g @qwen-code/qwen-code@preview
# Nightly
npm install -g @qwen-code/qwen-code@nightly
```
### Release Process Details
Every scheduled or manual release follows these steps:
1. Checks out the specified code (latest from `main` branch or specific commit).
2. Installs all dependencies.
3. Runs the full suite of `preflight` checks and integration tests.
4. If all tests succeed, it calculates the appropriate version number based on release type.
5. Builds and publishes the packages to npm with the appropriate dist-tag.
6. Creates a GitHub Release for the version.
### Failure Handling
If any step in the release workflow fails, it will automatically create a new issue in the repository with the labels `bug` and a type-specific failure label (e.g., `nightly-failure`, `preview-failure`). The issue will contain a link to the failed workflow run for easy debugging.
## Release Validation
After pushing a new release, smoke testing should be performed to ensure that the packages are working as expected. This can be done by installing the packages locally and running a set of tests to ensure that they are functioning correctly.
- `npx -y @qwen-code/qwen-code@latest --version` to validate the push worked as expected if you were not doing an RC or dev tag
- `npx -y @qwen-code/qwen-code@<release tag> --version` to validate the tag pushed appropriately
- _This is destructive locally_ `npm uninstall @qwen-code/qwen-code && npm uninstall -g @qwen-code/qwen-code && npm cache clean --force && npm install @qwen-code/qwen-code@<version>`
- A basic smoke test that exercises a few LLM commands and tools is recommended to ensure that the packages are working as expected. We'll codify this more in the future.
## When to merge the version change, or not?
The above pattern for creating patch or hotfix releases from current or older commits leaves the repository in the following state:
1. The Tag (`vX.Y.Z-patch.1`): This tag correctly points to the original commit on main
that contains the stable code you intended to release. This is crucial. Anyone checking
out this tag gets the exact code that was published.
2. The Branch (`release-vX.Y.Z-patch.1`): This branch contains one new commit on top of the
tagged commit. That new commit only contains the version number change in package.json
(and other related files like package-lock.json).
This separation is good. It keeps your main branch history clean of release-specific
version bumps until you decide to merge them.
Whether to merge the version-bump commit back into `main` is the critical decision, and it depends entirely on the nature of the release.
### Merge Back for Stable Patches and Hotfixes
You almost always want to merge the `release-<tag>` branch back into `main` for any
stable patch or hotfix release.
- Why? The primary reason is to update the version in main's package.json. If you release
v1.2.1 from an older commit but never merge the version bump back, your main branch's
package.json will still say "version": "1.2.0". The next developer who starts work for
the next feature release (v1.3.0) will be branching from a codebase that has an
incorrect, older version number. This leads to confusion and requires manual version
bumping later.
- The Process: After the release-v1.2.1 branch is created and the package is successfully
published, you should open a pull request to merge release-v1.2.1 into main. This PR
will contain just one commit: "chore: bump version to v1.2.1". It's a clean, simple
integration that keeps your main branch in sync with the latest released version.
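A minimal sketch of that merge-back, assuming the branch name used above:
```bash
# Open a PR that merges the release branch (containing only the version bump) into main
git fetch origin
gh pr create \
  --head release-v1.2.1 \
  --base main \
  --title "chore: bump version to v1.2.1" \
  --body "Sync main with the latest released version."
```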
### Do NOT Merge Back for Pre-Releases (RC, Beta, Dev)
You typically do not merge release branches for pre-releases back into `main`.
- Why? Pre-release versions (e.g., v1.3.0-rc.1, v1.3.0-rc.2) are, by definition, not
stable and are temporary. You don't want to pollute your main branch's history with a
series of version bumps for release candidates. The package.json in main should reflect
the latest stable release version, not an RC.
- The Process: The release-v1.3.0-rc.1 branch is created, the npm publish --tag rc happens,
and then... the branch has served its purpose. You can simply delete it. The code for
the RC is already on main (or a feature branch), so no functional code is lost. The
release branch was just a temporary vehicle for the version number.
## Local Testing and Validation: Changes to the Packaging and Publishing Process
If you need to test the release process without actually publishing to NPM or creating a public GitHub release, you can trigger the workflow manually from the GitHub UI.
1. Go to the [Actions tab](https://github.com/QwenLM/qwen-code/actions/workflows/release.yml) of the repository.
2. Click on the "Run workflow" dropdown.
3. Leave the `dry_run` option checked (`true`).
4. Click the "Run workflow" button.
This will run the entire release process but will skip the `npm publish` and `gh release create` steps. You can inspect the workflow logs to ensure everything is working as expected.
It is crucial to test any changes to the packaging and publishing process locally before committing them. This ensures that the packages will be published correctly and that they will work as expected when installed by a user.
To validate your changes, you can perform a dry run of the publishing process. This will simulate the publishing process without actually publishing the packages to the npm registry.
```bash
npm_package_version=9.9.9 SANDBOX_IMAGE_REGISTRY="registry" SANDBOX_IMAGE_NAME="thename" npm run publish:npm --dry-run
```
This command will do the following:
1. Build all the packages.
2. Run all the prepublish scripts.
3. Create the package tarballs that would be published to npm.
4. Print a summary of the packages that would be published.
You can then inspect the generated tarballs to ensure that they contain the correct files and that the `package.json` files have been updated correctly. The tarballs will be created in the root of each package's directory (e.g., `packages/cli/qwen-code-0.1.6.tgz`).
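For example, to list the contents of one of the generated tarballs (path taken from the example above):
```bash
tar -tzf packages/cli/qwen-code-0.1.6.tgz
```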
By performing a dry run, you can be confident that your changes to the packaging process are correct and that the packages will be published successfully.
## Release Deep Dive
The main goal of the release process is to take the source code from the packages/ directory, build it, and assemble a
clean, self-contained package in a temporary `dist` directory at the root of the project. This `dist` directory is what
actually gets published to NPM.
Here are the key stages:
Stage 1: Pre-Release Sanity Checks and Versioning
- What happens: Before any files are moved, the process ensures the project is in a good state. This involves running tests,
linting, and type-checking (npm run preflight). The version number in the root package.json and packages/cli/package.json
is updated to the new release version.
- Why: This guarantees that only high-quality, working code is released. Versioning is the first step to signify a new
release.
Stage 2: Building the Source Code
- What happens: The TypeScript source code in packages/core/src and packages/cli/src is compiled into JavaScript.
- File movement:
- `packages/core/src/**/*.ts` -> compiled to -> `packages/core/dist/`
- `packages/cli/src/**/*.ts` -> compiled to -> `packages/cli/dist/`
- Why: The TypeScript code written during development needs to be converted into plain JavaScript that can be run by
Node.js. The core package is built first as the cli package depends on it.
Stage 3: Bundling and Assembling the Final Publishable Package
This is the most critical stage where files are moved and transformed into their final state for publishing. The process uses modern bundling techniques to create the final package.
1. Bundle Creation:
- What happens: The prepare-package.js script creates a clean distribution package in the `dist` directory.
- Key transformations:
- Copies README.md and LICENSE to dist/
- Copies locales folder for internationalization
- Creates a clean package.json for distribution with only necessary dependencies
- Includes runtime dependencies like tiktoken
- Maintains optional dependencies for node-pty
2. The JavaScript Bundle is Created:
- What happens: The built JavaScript from both packages/core/dist and packages/cli/dist are bundled into a single,
executable JavaScript file using esbuild.
- File location: dist/cli.js
- Why: This creates a single, optimized file that contains all the necessary application code. It simplifies the package
by removing the need for complex dependency resolution at install time. A sketch of such an invocation appears after this list.
3. Static and Supporting Files are Copied:
- What happens: Essential files that are not part of the source code but are required for the package to work correctly
or be well-described are copied into the `dist` directory.
- File movement:
- README.md -> dist/README.md
- LICENSE -> dist/LICENSE
- locales/ -> dist/locales/
- Vendor files -> dist/vendor/
- Why:
- The README.md and LICENSE are standard files that should be included in any NPM package.
- Locales support internationalization features
- Vendor files contain necessary runtime dependencies
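As a rough sketch of what such an esbuild bundling step can look like (the entry point and flags here are illustrative, not the project's exact invocation):
```bash
# Bundle the built CLI and all of its dependencies into a single file
npx esbuild packages/cli/dist/index.js \
  --bundle \
  --platform=node \
  --format=esm \
  --outfile=dist/cli.js
```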
Stage 4: Publishing to NPM
- What happens: The npm publish command is run from inside the root `dist` directory.
- Why: By running npm publish from within the `dist` directory, only the files we carefully assembled in Stage 3 are uploaded
to the NPM registry. This prevents any source code, test files, or development configurations from being accidentally
published, resulting in a clean and minimal package for users.
This process ensures that the final published artifact is a purpose-built, clean, and efficient representation of the
project, rather than a direct copy of the development workspace.
## NPM Workspaces
This project uses [NPM Workspaces](https://docs.npmjs.com/cli/v10/using-npm/workspaces) to manage the packages within this monorepo. This simplifies development by allowing us to manage dependencies and run scripts across multiple packages from the root of the project.
### How it Works
The root `package.json` file defines the workspaces for this project:
```json
{
"workspaces": ["packages/*"]
}
```
This tells NPM that any folder inside the `packages` directory is a separate package that should be managed as part of the workspace.
### Benefits of Workspaces
- **Simplified Dependency Management**: Running `npm install` from the root of the project will install all dependencies for all packages in the workspace and link them together. This means you don't need to run `npm install` in each package's directory.
- **Automatic Linking**: Packages within the workspace can depend on each other. When you run `npm install`, NPM will automatically create symlinks between the packages. This means that when you make changes to one package, the changes are immediately available to other packages that depend on it.
- **Simplified Script Execution**: You can run scripts in any package from the root of the project using the `--workspace` flag. For example, to run the `build` script in the `cli` package, you can run `npm run build --workspace @qwen-code/qwen-code`.
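A few common workspace-scoped commands, for illustration:
```bash
# Install and link every workspace package from the repo root
npm install

# Run a script in a single package
npm run build --workspace @qwen-code/qwen-code

# Run a script in every package that defines it
npm run test --workspaces --if-present
```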


@@ -0,0 +1,297 @@
# Observability with OpenTelemetry
Learn how to enable and set up OpenTelemetry for Qwen Code.
- [Observability with OpenTelemetry](#observability-with-opentelemetry)
- [Key Benefits](#key-benefits)
- [OpenTelemetry Integration](#opentelemetry-integration)
- [Configuration](#configuration)
- [Aliyun Telemetry](#aliyun-telemetry)
- [Prerequisites](#prerequisites)
- [Direct Export (Recommended)](#direct-export-recommended)
- [Local Telemetry](#local-telemetry)
- [File-based Output (Recommended)](#file-based-output-recommended)
- [Collector-Based Export (Advanced)](#collector-based-export-advanced)
- [Logs and Metrics](#logs-and-metrics)
- [Logs](#logs)
- [Metrics](#metrics)
## Key Benefits
- **🔍 Usage Analytics**: Understand interaction patterns and feature adoption
across your team
- **⚡ Performance Monitoring**: Track response times, token consumption, and
resource utilization
- **🐛 Real-time Debugging**: Identify bottlenecks, failures, and error patterns
as they occur
- **📊 Workflow Optimization**: Make informed decisions to improve
configurations and processes
- **🏢 Enterprise Governance**: Monitor usage across teams, track costs, ensure
compliance, and integrate with existing monitoring infrastructure
## OpenTelemetry Integration
Built on **[OpenTelemetry]** — the vendor-neutral, industry-standard
observability framework — Qwen Code's observability system provides:
- **Universal Compatibility**: Export to any OpenTelemetry backend (Aliyun,
Jaeger, Prometheus, Datadog, etc.)
- **Standardized Data**: Use consistent formats and collection methods across
your toolchain
- **Future-Proof Integration**: Connect with existing and future observability
infrastructure
- **No Vendor Lock-in**: Switch between backends without changing your
instrumentation
[OpenTelemetry]: https://opentelemetry.io/
## Configuration
> [!note]
>
> **⚠️ Special Note: This feature requires corresponding code changes. This documentation is provided in advance; please refer to future code updates for actual functionality.**
All telemetry behavior is controlled through your `.qwen/settings.json` file.
These settings can be overridden by environment variables or CLI flags.
| Setting | Environment Variable | CLI Flag | Description | Values | Default |
| -------------- | ------------------------------ | -------------------------------------------------------- | ------------------------------------------------- | ------------------ | ----------------------- |
| `enabled` | `QWEN_TELEMETRY_ENABLED` | `--telemetry` / `--no-telemetry` | Enable or disable telemetry | `true`/`false` | `false` |
| `target` | `QWEN_TELEMETRY_TARGET` | `--telemetry-target <local\|qwen>` | Where to send telemetry data | `"qwen"`/`"local"` | `"local"` |
| `otlpEndpoint` | `QWEN_TELEMETRY_OTLP_ENDPOINT` | `--telemetry-otlp-endpoint <URL>` | OTLP collector endpoint | URL string | `http://localhost:4317` |
| `otlpProtocol` | `QWEN_TELEMETRY_OTLP_PROTOCOL` | `--telemetry-otlp-protocol <grpc\|http>` | OTLP transport protocol | `"grpc"`/`"http"` | `"grpc"` |
| `outfile` | `QWEN_TELEMETRY_OUTFILE` | `--telemetry-outfile <path>` | Save telemetry to file (overrides `otlpEndpoint`) | file path | - |
| `logPrompts` | `QWEN_TELEMETRY_LOG_PROMPTS` | `--telemetry-log-prompts` / `--no-telemetry-log-prompts` | Include prompts in telemetry logs | `true`/`false` | `true` |
| `useCollector` | `QWEN_TELEMETRY_USE_COLLECTOR` | - | Use external OTLP collector (advanced) | `true`/`false` | `false` |
**Note on boolean environment variables:** For the boolean settings (`enabled`,
`logPrompts`, `useCollector`), setting the corresponding environment variable to
`true` or `1` will enable the feature. Any other value will disable it.
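For illustration, the environment variables from the table can be combined on a single invocation (subject to the caveat above that this feature may depend on future code updates):
```bash
# Enable telemetry for one run, writing output to a local file
QWEN_TELEMETRY_ENABLED=true \
QWEN_TELEMETRY_TARGET=local \
QWEN_TELEMETRY_OUTFILE=.qwen/telemetry.log \
qwen -p "hello"
```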
For detailed information about all configuration options, see the
[Configuration Guide](./cli/configuration.md).
## Aliyun Telemetry
### Direct Export (Recommended)
Sends telemetry directly to Aliyun services. No collector needed.
1. Enable telemetry in your `.qwen/settings.json`:
```json
{
"telemetry": {
"enabled": true,
"target": "qwen"
}
}
```
2. Run Qwen Code and send prompts.
3. View logs and metrics in the Aliyun Console.
## Local Telemetry
For local development and debugging, you can capture telemetry data locally:
### File-based Output (Recommended)
1. Enable telemetry in your `.qwen/settings.json`:
```json
{
"telemetry": {
"enabled": true,
"target": "local",
"otlpEndpoint": "",
"outfile": ".qwen/telemetry.log"
}
}
```
2. Run Qwen Code and send prompts.
3. View logs and metrics in the specified file (e.g., `.qwen/telemetry.log`).
### Collector-Based Export (Advanced)
1. Run the automation script:
```bash
npm run telemetry -- --target=local
```
This will:
- Download and start Jaeger and OTEL collector
- Configure your workspace for local telemetry
- Provide a Jaeger UI at http://localhost:16686
- Save logs/metrics to `~/.qwen/tmp/<projectHash>/otel/collector.log`
- Stop collector on exit (e.g. `Ctrl+C`)
2. Run Qwen Code and send prompts.
3. View traces at http://localhost:16686 and logs/metrics in the collector log
file.
## Logs and Metrics
The following section describes the structure of logs and metrics generated for
Qwen Code.
- A `sessionId` is included as a common attribute on all logs and metrics.
### Logs
Logs are timestamped records of specific events. The following events are logged for Qwen Code:
- `qwen-code.config`: This event occurs once at startup with the CLI's configuration.
- **Attributes**:
- `model` (string)
- `embedding_model` (string)
- `sandbox_enabled` (boolean)
- `core_tools_enabled` (string)
- `approval_mode` (string)
- `api_key_enabled` (boolean)
- `vertex_ai_enabled` (boolean)
- `code_assist_enabled` (boolean)
- `log_prompts_enabled` (boolean)
- `file_filtering_respect_git_ignore` (boolean)
- `debug_mode` (boolean)
- `mcp_servers` (string)
- `output_format` (string: "text" or "json")
- `qwen-code.user_prompt`: This event occurs when a user submits a prompt.
- **Attributes**:
- `prompt_length` (int)
- `prompt_id` (string)
- `prompt` (string, this attribute is excluded if `log_prompts_enabled` is
configured to be `false`)
- `auth_type` (string)
- `qwen-code.tool_call`: This event occurs for each function call.
- **Attributes**:
- `function_name`
- `function_args`
- `duration_ms`
- `success` (boolean)
- `decision` (string: "accept", "reject", "auto_accept", or "modify", if
applicable)
- `error` (if applicable)
- `error_type` (if applicable)
- `content_length` (int, if applicable)
- `metadata` (if applicable, dictionary of string -> any)
- `qwen-code.file_operation`: This event occurs for each file operation.
- **Attributes**:
- `tool_name` (string)
- `operation` (string: "create", "read", "update")
- `lines` (int, if applicable)
- `mimetype` (string, if applicable)
- `extension` (string, if applicable)
- `programming_language` (string, if applicable)
- `diff_stat` (json string, if applicable): A JSON string with the following members:
- `ai_added_lines` (int)
- `ai_removed_lines` (int)
- `user_added_lines` (int)
- `user_removed_lines` (int)
- `qwen-code.api_request`: This event occurs when making a request to Qwen API.
- **Attributes**:
- `model`
- `request_text` (if applicable)
- `qwen-code.api_error`: This event occurs if the API request fails.
- **Attributes**:
- `model`
- `error`
- `error_type`
- `status_code`
- `duration_ms`
- `auth_type`
- `qwen-code.api_response`: This event occurs upon receiving a response from Qwen API.
- **Attributes**:
- `model`
- `status_code`
- `duration_ms`
- `error` (optional)
- `input_token_count`
- `output_token_count`
- `cached_content_token_count`
- `thoughts_token_count`
- `tool_token_count`
- `response_text` (if applicable)
- `auth_type`
- `qwen-code.tool_output_truncated`: This event occurs when the output of a tool call is too large and gets truncated.
- **Attributes**:
- `tool_name` (string)
- `original_content_length` (int)
- `truncated_content_length` (int)
- `threshold` (int)
- `lines` (int)
- `prompt_id` (string)
- `qwen-code.malformed_json_response`: This event occurs when a `generateJson` response from Qwen API cannot be parsed as a json.
- **Attributes**:
- `model`
- `qwen-code.flash_fallback`: This event occurs when Qwen Code switches to flash as fallback.
- **Attributes**:
- `auth_type`
- `qwen-code.slash_command`: This event occurs when a user executes a slash command.
- **Attributes**:
- `command` (string)
- `subcommand` (string, if applicable)
- `qwen-code.extension_enable`: This event occurs when an extension is enabled.
- `qwen-code.extension_install`: This event occurs when an extension is installed.
- **Attributes**:
- `extension_name` (string)
- `extension_version` (string)
- `extension_source` (string)
- `status` (string)
- `qwen-code.extension_uninstall`: This event occurs when an extension is uninstalled.
### Metrics
Metrics are numerical measurements of behavior over time. The following metrics are collected for Qwen Code (metric names remain `qwen-code.*` for compatibility):
- `qwen-code.session.count` (Counter, Int): Incremented once per CLI startup.
- `qwen-code.tool.call.count` (Counter, Int): Counts tool calls.
- **Attributes**:
- `function_name`
- `success` (boolean)
- `decision` (string: "accept", "reject", or "modify", if applicable)
- `tool_type` (string: "mcp", or "native", if applicable)
- `qwen-code.tool.call.latency` (Histogram, ms): Measures tool call latency.
- **Attributes**:
- `function_name`
- `decision` (string: "accept", "reject", or "modify", if applicable)
- `qwen-code.api.request.count` (Counter, Int): Counts all API requests.
- **Attributes**:
- `model`
- `status_code`
- `error_type` (if applicable)
- `qwen-code.api.request.latency` (Histogram, ms): Measures API request latency.
- **Attributes**:
- `model`
- `qwen-code.token.usage` (Counter, Int): Counts the number of tokens used.
- **Attributes**:
- `model`
- `type` (string: "input", "output", "thought", "cache", or "tool")
- `qwen-code.file.operation.count` (Counter, Int): Counts file operations.
- **Attributes**:
- `operation` (string: "create", "read", "update"): The type of file operation.
- `lines` (Int, if applicable): Number of lines in the file.
- `mimetype` (string, if applicable): Mimetype of the file.
- `extension` (string, if applicable): File extension of the file.
- `model_added_lines` (Int, if applicable): Number of lines added/changed by the model.
- `model_removed_lines` (Int, if applicable): Number of lines removed/changed by the model.
- `user_added_lines` (Int, if applicable): Number of lines added/changed by user in AI proposed changes.
- `user_removed_lines` (Int, if applicable): Number of lines removed/changed by user in AI proposed changes.
- `programming_language` (string, if applicable): The programming language of the file.
- `qwen-code.chat_compression` (Counter, Int): Counts chat compression operations
- **Attributes**:
- `tokens_before` (Int): Number of tokens in context prior to compression
- `tokens_after` (Int): Number of tokens in context after compression


@@ -0,0 +1,81 @@
# Example Proxy Script
The following is an example of a proxy script that can be used with the `GEMINI_SANDBOX_PROXY_COMMAND` environment variable. This script only allows `HTTPS` connections to `example.com:443` and declines all other requests.
```javascript
#!/usr/bin/env node
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
// Example proxy server that listens on :::8877 and only allows HTTPS connections to example.com.
// Set `GEMINI_SANDBOX_PROXY_COMMAND=scripts/example-proxy.js` to run proxy alongside sandbox
// Test via `curl https://example.com` inside sandbox (in shell mode or via shell tool)
import http from 'node:http';
import net from 'node:net';
import { URL } from 'node:url';
import console from 'node:console';
const PROXY_PORT = 8877;
const ALLOWED_DOMAINS = ['example.com', 'googleapis.com'];
const ALLOWED_PORT = '443';
const server = http.createServer((req, res) => {
// Deny all requests other than CONNECT for HTTPS
console.log(
`[PROXY] Denying non-CONNECT request for: ${req.method} ${req.url}`,
);
res.writeHead(405, { 'Content-Type': 'text/plain' });
res.end('Method Not Allowed');
});
server.on('connect', (req, clientSocket, head) => {
// req.url will be in the format "hostname:port" for a CONNECT request.
const { port, hostname } = new URL(`http://${req.url}`);
console.log(`[PROXY] Intercepted CONNECT request for: ${hostname}:${port}`);
if (
ALLOWED_DOMAINS.some(
(domain) => hostname === domain || hostname.endsWith(`.${domain}`),
) &&
port === ALLOWED_PORT
) {
console.log(`[PROXY] Allowing connection to ${hostname}:${port}`);
// Establish a TCP connection to the original destination.
const serverSocket = net.connect(port, hostname, () => {
clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
// Create a tunnel by piping data between the client and the destination server.
serverSocket.write(head);
serverSocket.pipe(clientSocket);
clientSocket.pipe(serverSocket);
});
serverSocket.on('error', (err) => {
console.error(`[PROXY] Error connecting to destination: ${err.message}`);
clientSocket.end(`HTTP/1.1 502 Bad Gateway\r\n\r\n`);
});
} else {
console.log(`[PROXY] Denying connection to ${hostname}:${port}`);
clientSocket.end('HTTP/1.1 403 Forbidden\r\n\r\n');
}
clientSocket.on('error', (err) => {
// This can happen if the client hangs up.
console.error(`[PROXY] Client socket error: ${err.message}`);
});
});
server.listen(PROXY_PORT, () => {
const address = server.address();
console.log(`[PROXY] Proxy listening on ${address.address}:${address.port}`);
console.log(
`[PROXY] Allowing HTTPS connections to domains: ${ALLOWED_DOMAINS.join(', ')}`,
);
});
```

View File

@@ -0,0 +1,121 @@
# Extension Releasing
There are two primary ways of releasing extensions to users:
- [Git repository](#releasing-through-a-git-repository)
- [Github Releases](#releasing-through-github-releases)
Git repository releases tend to be the simplest and most flexible approach, while GitHub Releases can be more efficient on initial install because they ship as a single archive instead of requiring a git clone, which downloads each file individually. GitHub Releases may also contain platform-specific archives if you need to ship platform-specific binary files.
## Releasing through a git repository
This is the most flexible and simple option. All you need to do is create a publicly accessible git repo (such as a public GitHub repository); users can then install your extension using `qwen extensions install <your-repo-uri>`, or, for a GitHub repository, the simplified `qwen extensions install <org>/<repo>` format. They can optionally depend on a specific ref (branch/tag/commit) using the `--ref=<some-ref>` argument, which defaults to the default branch.
Whenever commits are pushed to the ref that a user depends on, they will be prompted to update the extension. Note that this also allows for easy rollbacks: the HEAD commit is always treated as the latest version, regardless of the actual version in the `qwen-extension.json` file.
### Managing release channels using a git repository
Users can depend on any ref from your git repo, such as a branch or tag, which allows you to manage multiple release channels.
For instance, you can maintain a `stable` branch, which users can install with `qwen extensions install <your-repo-uri> --ref=stable`. Or you could make this the default by treating your default branch as your stable release branch and doing development in a different branch (for instance, `dev`). You can maintain as many branches or tags as you like, providing maximum flexibility for you and your users.
Note that these `ref` arguments can be tags, branches, or even specific commits, which allows users to depend on a specific version of your extension. It is up to you how you want to manage your tags and branches.
### Example releasing flow using a git repo
While there are many options for how you want to manage releases using a git flow, we recommend treating your default branch as your "stable" release branch. This means that the default behavior for `qwen extensions install <your-repo-uri>` is to be on the stable release branch.
Let's say you want to maintain three standard release channels: `stable`, `preview`, and `dev`. You would do all your standard development in the `dev` branch. When you are ready to do a preview release, you merge that branch into your `preview` branch. When you are ready to promote your preview branch to stable, you merge `preview` into your stable branch (which might be your default branch or a different branch). A concrete sketch of this flow follows below.
You can also cherry-pick changes from one branch into another using `git cherry-pick`, but note that this will give your branches slightly divergent histories unless you force-push your branches on each release to restore a clean slate (which may not be possible for the default branch, depending on your repository settings). If you plan on doing cherry-picks, consider not using your default branch as the stable branch, so you can avoid force-pushing to the default branch, which should generally be avoided.
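As a concrete sketch, assuming `main` is your stable default branch (branch names here are illustrative):

```
# day-to-day development happens on dev
git checkout dev

# cut a preview release
git checkout preview && git merge dev && git push origin preview

# promote preview to stable
git checkout main && git merge preview && git push origin main
```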
## Releasing through GitHub Releases
Qwen Code extensions can be distributed through [GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/about-releases). This provides a faster and more reliable initial installation experience for users, as it avoids the need to clone the repository.
Each release includes at least one archive file, which contains the full contents of the repo at the tag that it was linked to. Releases may also include [pre-built archives](#custom-pre-built-archives) if your extension requires some build step or has platform specific binaries attached to it.
When checking for updates, Qwen Code will look for the latest release on GitHub (you must mark the release as the latest when creating it), unless the user installed a specific release by passing `--ref=<some-release-tag>`. We do not currently support opting in to pre-releases or semver-based version selection.
### Custom pre-built archives
Custom archives must be attached directly to the GitHub release as assets and must be fully self-contained. This means they should include the entire extension; see [archive structure](#archive-structure).
If your extension is platform-independent, you can provide a single generic asset. In this case, there should be only one asset attached to the release.
Custom archives may also be used if you want to develop your extension within a larger repository: you can build an archive that has a different layout from the repo itself (for instance, it might be an archive of just a subdirectory containing the extension).
#### Platform specific archives
To ensure Qwen Code can automatically find the correct release asset for each platform, you must follow this naming convention. The CLI will search for assets in the following order:
1. **Platform and Architecture-Specific:** `{platform}.{arch}.{name}.{extension}`
2. **Platform-Specific:** `{platform}.{name}.{extension}`
3. **Generic:** If only one asset is provided, it will be used as a generic fallback.
- `{name}`: The name of your extension.
- `{platform}`: The operating system. Supported values are:
- `darwin` (macOS)
- `linux`
- `win32` (Windows)
- `{arch}`: The architecture. Supported values are:
- `x64`
- `arm64`
- `{extension}`: The file extension of the archive (e.g., `.tar.gz` or `.zip`).
**Examples:**
- `darwin.arm64.my-tool.tar.gz` (specific to Apple Silicon Macs)
- `darwin.my-tool.tar.gz` (for all Macs)
- `linux.x64.my-tool.tar.gz`
- `win32.my-tool.zip`
#### Archive structure
Archives must be fully contained extensions and have all the standard requirements - specifically the `qwen-extension.json` file must be at the root of the archive.
The rest of the layout should look exactly the same as a typical extension; see [extension.md](extension.md).
#### Example GitHub Actions workflow
Here is an example of a GitHub Actions workflow that builds and releases a Qwen Code extension for multiple platforms:
```yaml
name: Release Extension
on:
  push:
    tags:
      - 'v*'

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Build extension
        run: npm run build
      - name: Create release assets
        run: |
          npm run package -- --platform=darwin --arch=arm64
          npm run package -- --platform=linux --arch=x64
          npm run package -- --platform=win32 --arch=x64
      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          files: |
            release/darwin.arm64.my-tool.tar.gz
            release/linux.x64.my-tool.tar.gz
            release/win32.x64.my-tool.zip
```

View File

@@ -0,0 +1,158 @@
# Qwen Code Extensions
Qwen Code extensions package prompts, MCP servers, and custom commands into a familiar and user-friendly format. With extensions, you can expand the capabilities of Qwen Code and share those capabilities with others. They are designed to be easily installable and shareable.
## Extension management
We offer a suite of extension management tools using `qwen extensions` commands.
Note that these commands are not supported from within the CLI, although you can list installed extensions using the `/extensions list` subcommand.
Note that changes made by these commands are only reflected in active CLI sessions after a restart.
### Installing an extension
You can install an extension using `qwen extensions install` with either a GitHub URL or a local path.
Note that we create a copy of the installed extension, so you will need to run `qwen extensions update` to pull in changes from both locally-defined extensions and those on GitHub.
```
qwen extensions install https://github.com/qwen-cli-extensions/security
```
This will install the Qwen Code Security extension, which offers support for a `/security:analyze` command.
### Uninstalling an extension
To uninstall, run `qwen extensions uninstall extension-name`, so, in the case of the install example:
```
qwen extensions uninstall qwen-cli-security
```
### Disabling an extension
Extensions are, by default, enabled across all workspaces. You can disable an extension entirely or for a specific workspace.
For example, `qwen extensions disable extension-name` will disable the extension at the user level, so it will be disabled everywhere. `qwen extensions disable extension-name --scope=workspace` will only disable the extension in the current workspace.
### Enabling an extension
You can enable extensions using `qwen extensions enable extension-name`. You can also enable an extension for a specific workspace using `qwen extensions enable extension-name --scope=workspace` from within that workspace.
This is useful if you have an extension disabled at the top-level and only enabled in specific places.
### Updating an extension
For extensions installed from a local path or a git repository, you can explicitly update to the latest version (as reflected in the `qwen-extension.json` `version` field) with `qwen extensions update extension-name`.
You can update all extensions with:
```
qwen extensions update --all
```
## Extension creation
We offer commands to make extension development easier.
### Create a boilerplate extension
We offer several example extensions: `context`, `custom-commands`, `exclude-tools`, and `mcp-server`. You can view these examples [here](https://github.com/QwenLM/qwen-code/tree/main/packages/cli/src/commands/extensions/examples).
To copy one of these examples into a development directory using the type of your choosing, run:
```
qwen extensions new path/to/directory custom-commands
```
### Link a local extension
The `qwen extensions link` command will create a symbolic link from the extension installation directory to the development path.
This is useful so you don't have to run `qwen extensions update` every time you make changes you'd like to test.
```
qwen extensions link path/to/directory
```
## How it works
On startup, Qwen Code looks for extensions in `<home>/.qwen/extensions`.
Extensions exist as a directory that contains a `qwen-extension.json` file. For example:
`<home>/.qwen/extensions/my-extension/qwen-extension.json`
### `qwen-extension.json`
The `qwen-extension.json` file contains the configuration for the extension. The file has the following structure:
```json
{
"name": "my-extension",
"version": "1.0.0",
"mcpServers": {
"my-server": {
"command": "node my-server.js"
}
},
"contextFileName": "QWEN.md",
"excludeTools": ["run_shell_command"]
}
```
- `name`: The name of the extension. This is used to uniquely identify the extension and for conflict resolution when extension commands have the same name as user or project commands. Names should use lowercase letters, numbers, and dashes rather than underscores or spaces. This is how users will refer to your extension in the CLI. Note that we expect this name to match the extension directory name.
- `version`: The version of the extension.
- `mcpServers`: A map of MCP servers to configure. The key is the name of the server, and the value is the server configuration. These servers will be loaded on startup just like MCP servers configured in a [`settings.json` file](./cli/configuration.md). If both an extension and a `settings.json` file configure an MCP server with the same name, the server defined in the `settings.json` file takes precedence.
- Note that all MCP server configuration options are supported except for `trust`.
- `contextFileName`: The name of the file that contains the context for the extension. This will be used to load the context from the extension directory. If this property is not used but a `QWEN.md` file is present in your extension directory, then that file will be loaded.
- `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command. Note that this differs from the MCP server `excludeTools` functionality, which can be listed in the MCP server config. **Important:** Tools specified in `excludeTools` will be disabled for the entire conversation context and will affect all subsequent queries in the current session.
When Qwen Code starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
### Custom commands
Extensions can provide [custom commands](./cli/commands.md#custom-commands) by placing TOML files in a `commands/` subdirectory within the extension directory. These commands follow the same format as user and project custom commands and use standard naming conventions.
**Example**
An extension named `gcp` with the following structure:
```
.qwen/extensions/gcp/
├── qwen-extension.json
└── commands/
├── deploy.toml
└── gcs/
└── sync.toml
```
Would provide these commands:
- `/deploy` - Shows as `[gcp] Custom command from deploy.toml` in help
- `/gcs:sync` - Shows as `[gcp] Custom command from sync.toml` in help
### Conflict resolution
Extension commands have the lowest precedence. When a conflict occurs with user or project commands:
1. **No conflict**: Extension command uses its natural name (e.g., `/deploy`)
2. **With conflict**: Extension command is renamed with the extension prefix (e.g., `/gcp.deploy`)
For example, if both a user and the `gcp` extension define a `deploy` command:
- `/deploy` - Executes the user's deploy command
- `/gcp.deploy` - Executes the extension's deploy command (marked with `[gcp]` tag)
## Variables
Qwen Code extensions allow variable substitution in `qwen-extension.json`. This can be useful if, for example, you need the extension's own directory to run an MCP server using `"cwd": "${extensionPath}${/}run.ts"`.
**Supported variables:**
| variable | description |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `${extensionPath}` | The fully-qualified path of the extension in the user's filesystem e.g., '/Users/username/.qwen/extensions/example-extension'. This will not unwrap symlinks. |
| `${workspacePath}` | The fully-qualified path of the current workspace. |
| `${/}` or `${pathSeparator}` | The path separator (differs per OS). |
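For example, a hypothetical `qwen-extension.json` that uses these variables to launch an MCP server from the extension's own directory:

```json
{
  "name": "my-extension",
  "version": "1.0.0",
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["${extensionPath}${/}server.js"],
      "cwd": "${extensionPath}"
    }
  }
}
```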

View File

@@ -0,0 +1,213 @@
# Getting Started with Qwen Code Extensions
This guide will walk you through creating your first Qwen Code extension. You'll learn how to set up a new extension, add a custom tool via an MCP server, create a custom command, and provide context to the model with a `QWEN.md` file.
## Prerequisites
Before you start, make sure you have Qwen Code installed and a basic understanding of Node.js and TypeScript.
## Step 1: Create a New Extension
The easiest way to start is by using one of the built-in templates. We'll use the `mcp-server` example as our foundation.
Run the following command to create a new directory called `my-first-extension` with the template files:
```bash
qwen extensions new my-first-extension mcp-server
```
This will create a new directory with the following structure:
```
my-first-extension/
├── example.ts
├── qwen-extension.json
├── package.json
└── tsconfig.json
```
## Step 2: Understand the Extension Files
Let's look at the key files in your new extension.
### `qwen-extension.json`
This is the manifest file for your extension. It tells Qwen Code how to load and use your extension.
```json
{
"name": "my-first-extension",
"version": "1.0.0",
"mcpServers": {
"nodeServer": {
"command": "node",
"args": ["${extensionPath}${/}dist${/}example.js"],
"cwd": "${extensionPath}"
}
}
}
```
- `name`: The unique name for your extension.
- `version`: The version of your extension.
- `mcpServers`: This section defines one or more Model Context Protocol (MCP) servers. MCP servers are how you can add new tools for the model to use.
- `command`, `args`, `cwd`: These fields specify how to start your server. Notice the use of the `${extensionPath}` variable, which Qwen Code replaces with the absolute path to your extension's installation directory. This allows your extension to work regardless of where it's installed.
### `example.ts`
This file contains the source code for your MCP server. It's a simple Node.js server that uses the `@modelcontextprotocol/sdk`.
```typescript
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
const server = new McpServer({
name: 'prompt-server',
version: '1.0.0',
});
// Registers a new tool named 'fetch_posts'
server.registerTool(
'fetch_posts',
{
description: 'Fetches a list of posts from a public API.',
inputSchema: z.object({}).shape,
},
async () => {
const apiResponse = await fetch(
'https://jsonplaceholder.typicode.com/posts',
);
const posts = await apiResponse.json();
const response = { posts: posts.slice(0, 5) };
return {
content: [
{
type: 'text',
text: JSON.stringify(response),
},
],
};
},
);
// ... (prompt registration omitted for brevity)
const transport = new StdioServerTransport();
await server.connect(transport);
```
This server defines a single tool called `fetch_posts` that fetches data from a public API.
### `package.json` and `tsconfig.json`
These are standard configuration files for a TypeScript project. The `package.json` file defines dependencies and a `build` script, and `tsconfig.json` configures the TypeScript compiler.
## Step 3: Build and Link Your Extension
Before you can use the extension, you need to compile the TypeScript code and link the extension to your Qwen Code installation for local development.
1. **Install dependencies:**
```bash
cd my-first-extension
npm install
```
2. **Build the server:**
```bash
npm run build
```
This will compile `example.ts` into `dist/example.js`, which is the file referenced in your `qwen-extension.json`.
3. **Link the extension:**
The `link` command creates a symbolic link from the Qwen Code extensions directory to your development directory. This means any changes you make will be reflected immediately without needing to reinstall.
```bash
qwen extensions link .
```
Now, restart your Qwen Code session. The new `fetch_posts` tool will be available. You can test it by asking: "fetch posts".
## Step 4: Add a Custom Command
Custom commands provide a way to create shortcuts for complex prompts. Let's add a command that searches for a pattern in your code.
1. Create a `commands` directory and a subdirectory for your command group:
```bash
mkdir -p commands/fs
```
2. Create a file named `commands/fs/grep-code.toml`:
```toml
prompt = """
Please summarize the findings for the pattern `{{args}}`.
Search Results:
!{grep -r {{args}} .}
"""
```
This command, `/fs:grep-code`, will take an argument, run the `grep` shell command with it, and pipe the results into a prompt for summarization.
After saving the file, restart Qwen Code. You can now run `/fs:grep-code "some pattern"` to use your new command.
## Step 5: Add a Custom `QWEN.md`
You can provide persistent context to the model by adding a `QWEN.md` file to your extension. This is useful for giving the model instructions on how to behave or information about your extension's tools. Note that you may not always need this for extensions built to expose commands and prompts.
1. Create a file named `QWEN.md` in the root of your extension directory:
```markdown
# My First Extension Instructions
You are an expert developer assistant. When the user asks you to fetch posts, use the `fetch_posts` tool. Be concise in your responses.
```
2. Update your `qwen-extension.json` to tell the CLI to load this file:
```json
{
"name": "my-first-extension",
"version": "1.0.0",
"contextFileName": "QWEN.md",
"mcpServers": {
"nodeServer": {
"command": "node",
"args": ["${extensionPath}${/}dist${/}example.js"],
"cwd": "${extensionPath}"
}
}
}
```
Restart the CLI again. The model will now have the context from your `QWEN.md` file in every session where the extension is active.
## Step 6: Releasing Your Extension
Once you are happy with your extension, you can share it with others. The two primary ways of releasing extensions are via a Git repository or through GitHub Releases. Using a public Git repository is the simplest method.
For detailed instructions on both methods, please refer to the [Extension Releasing Guide](extension-releasing.md).
## Conclusion
You've successfully created a Qwen Code extension! You learned how to:
- Bootstrap a new extension from a template.
- Add custom tools with an MCP server.
- Create convenient custom commands.
- Provide persistent context to the model.
- Link your extension for local development.
From here, you can explore more advanced features and build powerful new capabilities into Qwen Code.

View File

@@ -0,0 +1,74 @@
# Qwen Code Roadmap
> **Objective**: Catch up with Claude Code's product functionality, continuously refine details, and enhance user experience.
| Category | Phase 1 | Phase 2 |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| User Experience | ✅ Terminal UI<br>✅ Support OpenAI Protocol<br>✅ Settings<br>✅ OAuth<br>✅ Cache Control<br>✅ Memory<br>✅ Compress<br>✅ Theme | Better UI<br>OnBoarding<br>LogView<br>✅ Session<br>Permission<br>🔄 Cross-platform Compatibility |
| Coding Workflow | ✅ Slash Commands<br>✅ MCP<br>✅ PlanMode<br>✅ TodoWrite<br>✅ SubAgent<br>✅ Multi Model<br>✅ Chat Management<br>✅ Tools (WebFetch, Bash, TextSearch, FileReadFile, EditFile) | 🔄 Hooks<br>SubAgent (enhanced)<br>✅ Skill<br>✅ Headless Mode<br>✅ Tools (WebSearch) |
| Building Open Capabilities | ✅ Custom Commands | ✅ QwenCode SDK<br> Extension |
| Integrating Community Ecosystem | | ✅ VSCode Plugin<br>🔄 ACP/Zed<br>✅ GHA |
| Administrative Capabilities | ✅ Stats<br>✅ Feedback | Costs<br>Dashboard |
> For more details, please see the list below.
## Features
#### Completed Features
| Feature | Version | Description | Category |
| ----------------------- | --------- | ------------------------------------------------------- | ------------------------------- |
| Skill | `V0.6.0` | Extensible custom AI skills | Coding Workflow |
| GitHub Actions          | `V0.5.0`  | qwen-code-action and automation                          | Integrating Community Ecosystem |
| VSCode Plugin | `V0.5.0` | VSCode extension plugin | Integrating Community Ecosystem |
| QwenCode SDK | `V0.4.0` | Open SDK for third-party integration | Building Open Capabilities |
| Session | `V0.4.0` | Enhanced session management | User Experience |
| i18n | `V0.3.0` | Internationalization and multilingual support | User Experience |
| Headless Mode | `V0.3.0` | Headless mode (non-interactive) | Coding Workflow |
| ACP/Zed | `V0.2.0` | ACP and Zed editor integration | Integrating Community Ecosystem |
| Terminal UI | `V0.1.0+` | Interactive terminal user interface | User Experience |
| Settings | `V0.1.0+` | Configuration management system | User Experience |
| Theme | `V0.1.0+` | Multi-theme support | User Experience |
| Support OpenAI Protocol | `V0.1.0+` | Support for OpenAI API protocol | User Experience |
| Chat Management | `V0.1.0+` | Session management (save, restore, browse) | Coding Workflow |
| MCP | `V0.1.0+` | Model Context Protocol integration | Coding Workflow |
| Multi Model | `V0.1.0+` | Multi-model support and switching | Coding Workflow |
| Slash Commands | `V0.1.0+` | Slash command system | Coding Workflow |
| Tool: Bash | `V0.1.0+` | Shell command execution tool (with is_background param) | Coding Workflow |
| Tool: FileRead/EditFile | `V0.1.0+` | File read/write and edit tools | Coding Workflow |
| Custom Commands | `V0.1.0+` | Custom command loading | Building Open Capabilities |
| Feedback | `V0.1.0+` | Feedback mechanism (/bug command) | Administrative Capabilities |
| Stats | `V0.1.0+` | Usage statistics and quota display | Administrative Capabilities |
| Memory | `V0.0.9+` | Project-level and global memory management | User Experience |
| Cache Control | `V0.0.9+` | DashScope cache control | User Experience |
| PlanMode | `V0.0.14` | Task planning mode | Coding Workflow |
| Compress | `V0.0.11` | Chat compression mechanism | User Experience |
| SubAgent | `V0.0.11` | Dedicated sub-agent system | Coding Workflow |
| TodoWrite | `V0.0.10` | Task management and progress tracking | Coding Workflow |
| Tool: TextSearch | `V0.0.8+` | Text search tool (grep, supports .qwenignore) | Coding Workflow |
| Tool: WebFetch | `V0.0.7+` | Web content fetching tool | Coding Workflow |
| Tool: WebSearch | `V0.0.7+` | Web search tool (using Tavily API) | Coding Workflow |
| OAuth | `V0.0.5+` | OAuth login authentication (Qwen OAuth) | User Experience |
#### Features to Develop
| Feature | Priority | Status | Description | Category |
| ---------------------------- | -------- | ----------- | --------------------------------- | --------------------------- |
| Better UI | P1 | Planned | Optimized terminal UI interaction | User Experience |
| OnBoarding | P1 | Planned | New user onboarding flow | User Experience |
| Permission | P1 | Planned | Permission system optimization | User Experience |
| Cross-platform Compatibility | P1 | In Progress | Windows/Linux/macOS compatibility | User Experience |
| LogView | P2 | Planned | Log viewing and debugging feature | User Experience |
| Hooks | P2 | In Progress | Extension hooks system | Coding Workflow |
| Extension | P2 | Planned | Extension system | Building Open Capabilities |
| Costs | P2 | Planned | Cost tracking and analysis | Administrative Capabilities |
| Dashboard | P2 | Planned | Management dashboard | Administrative Capabilities |
#### Distinctive Features to Discuss
| Feature | Status | Description |
| ---------------- | -------- | ----------------------------------------------------- |
| Home Spotlight | Research | Project discovery and quick launch |
| Competitive Mode | Research | Competitive mode |
| Pulse | Research | User activity pulse analysis (OpenAI Pulse reference) |
| Code Wiki | Research | Project codebase wiki/documentation system |

View File

@@ -0,0 +1,375 @@
# Typescript SDK
## @qwen-code/sdk
A minimal, experimental TypeScript SDK for programmatic access to Qwen Code.
Feel free to submit a feature request/issue/PR.
## Installation
```bash
npm install @qwen-code/sdk
```
## Requirements
- Node.js >= 20.0.0
- [Qwen Code](https://github.com/QwenLM/qwen-code) >= 0.4.0 (stable) installed and accessible in PATH
> **Note for nvm users**: If you use nvm to manage Node.js versions, the SDK may not be able to auto-detect the Qwen Code executable. You should explicitly set the `pathToQwenExecutable` option to the full path of the `qwen` binary.
## Quick Start
```typescript
import { query } from '@qwen-code/sdk';
// Single-turn query
const result = query({
prompt: 'What files are in the current directory?',
options: {
cwd: '/path/to/project',
},
});
// Iterate over messages
for await (const message of result) {
if (message.type === 'assistant') {
console.log('Assistant:', message.message.content);
} else if (message.type === 'result') {
console.log('Result:', message.result);
}
}
```
## API Reference
### `query(config)`
Creates a new query session with Qwen Code.
#### Parameters
- `prompt`: `string | AsyncIterable<SDKUserMessage>` - The prompt to send. Use a string for single-turn queries or an async iterable for multi-turn conversations.
- `options`: `QueryOptions` - Configuration options for the query session.
#### QueryOptions
| Option | Type | Default | Description |
| ------------------------ | ---------------------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `cwd` | `string` | `process.cwd()` | The working directory for the query session. Determines the context in which file operations and commands are executed. |
| `model` | `string` | - | The AI model to use (e.g., `'qwen-max'`, `'qwen-plus'`, `'qwen-turbo'`). Takes precedence over `OPENAI_MODEL` and `QWEN_MODEL` environment variables. |
| `pathToQwenExecutable` | `string` | Auto-detected | Path to the Qwen Code executable. Supports multiple formats: `'qwen'` (native binary from PATH), `'/path/to/qwen'` (explicit path), `'/path/to/cli.js'` (Node.js bundle), `'node:/path/to/cli.js'` (force Node.js runtime), `'bun:/path/to/cli.js'` (force Bun runtime). If not provided, auto-detects from: `QWEN_CODE_CLI_PATH` env var, `~/.volta/bin/qwen`, `~/.npm-global/bin/qwen`, `/usr/local/bin/qwen`, `~/.local/bin/qwen`, `~/node_modules/.bin/qwen`, `~/.yarn/bin/qwen`. |
| `permissionMode` | `'default' \| 'plan' \| 'auto-edit' \| 'yolo'` | `'default'` | Permission mode controlling tool execution approval. See [Permission Modes](#permission-modes) for details. |
| `canUseTool` | `CanUseTool` | - | Custom permission handler for tool execution approval. Invoked when a tool requires confirmation. Must respond within 60 seconds or the request will be auto-denied. See [Custom Permission Handler](#custom-permission-handler). |
| `env` | `Record<string, string>` | - | Environment variables to pass to the Qwen Code process. Merged with the current process environment. |
| `mcpServers` | `Record<string, McpServerConfig>` | - | MCP (Model Context Protocol) servers to connect. Supports external servers (stdio/SSE/HTTP) and SDK-embedded servers. External servers are configured with transport options like `command`, `args`, `url`, `httpUrl`, etc. SDK servers use `{ type: 'sdk', name: string, instance: Server }`. |
| `abortController` | `AbortController` | - | Controller to cancel the query session. Call `abortController.abort()` to terminate the session and cleanup resources. |
| `debug` | `boolean` | `false` | Enable debug mode for verbose logging from the CLI process. |
| `maxSessionTurns` | `number` | `-1` (unlimited) | Maximum number of conversation turns before the session automatically terminates. A turn consists of a user message and an assistant response. |
| `coreTools` | `string[]` | - | Equivalent to `tool.core` in settings.json. If specified, only these tools will be available to the AI. Example: `['read_file', 'write_file', 'run_terminal_cmd']`. |
| `excludeTools` | `string[]` | - | Equivalent to `tool.exclude` in settings.json. Excluded tools return a permission error immediately. Takes highest priority over all other permission settings. Supports pattern matching: tool name (`'write_file'`), tool class (`'ShellTool'`), or shell command prefix (`'ShellTool(rm )'`). |
| `allowedTools` | `string[]` | - | Equivalent to `tool.allowed` in settings.json. Matching tools bypass `canUseTool` callback and execute automatically. Only applies when tool requires confirmation. Supports same pattern matching as `excludeTools`. |
| `authType` | `'openai' \| 'qwen-oauth'` | `'openai'` | Authentication type for the AI service. Using `'qwen-oauth'` in SDK is not recommended as credentials are stored in `~/.qwen` and may need periodic refresh. |
| `agents` | `SubagentConfig[]` | - | Configuration for subagents that can be invoked during the session. Subagents are specialized AI agents for specific tasks or domains. |
| `includePartialMessages` | `boolean` | `false` | When `true`, the SDK emits incomplete messages as they are being generated, allowing real-time streaming of the AI's response. |
### Timeouts
The SDK enforces the following default timeouts:
| Timeout | Default | Description |
| ---------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `canUseTool` | 1 minute | Maximum time for `canUseTool` callback to respond. If exceeded, the tool request is auto-denied. |
| `mcpRequest` | 1 minute | Maximum time for SDK MCP tool calls to complete. |
| `controlRequest` | 1 minute | Maximum time for control operations like `initialize()`, `setModel()`, `setPermissionMode()`, and `interrupt()` to complete. |
| `streamClose` | 1 minute | Maximum time to wait for initialization to complete before closing CLI stdin in multi-turn mode with SDK MCP servers. |
You can customize these timeouts via the `timeout` option:
```typescript
const result = query({
  prompt: 'Your prompt',
  options: {
    timeout: {
      canUseTool: 60000, // 60 seconds for permission callback
      mcpRequest: 600000, // 10 minutes for MCP tool calls
      controlRequest: 60000, // 60 seconds for control requests
      streamClose: 15000, // 15 seconds for stream close wait
    },
  },
});
```
### Message Types
The SDK provides type guards to identify different message types:
```typescript
import {
isSDKUserMessage,
isSDKAssistantMessage,
isSDKSystemMessage,
isSDKResultMessage,
isSDKPartialAssistantMessage,
} from '@qwen-code/sdk';
for await (const message of result) {
if (isSDKAssistantMessage(message)) {
// Handle assistant message
} else if (isSDKResultMessage(message)) {
// Handle result message
}
}
```
### Query Instance Methods
The `Query` instance returned by `query()` provides several methods:
```typescript
const q = query({ prompt: 'Hello', options: {} });
// Get session ID
const sessionId = q.getSessionId();
// Check if closed
const closed = q.isClosed();
// Interrupt the current operation
await q.interrupt();
// Change permission mode mid-session
await q.setPermissionMode('yolo');
// Change model mid-session
await q.setModel('qwen-max');
// Close the session
await q.close();
```
## Permission Modes
The SDK supports different permission modes for controlling tool execution:
- **`default`**: Write tools are denied unless approved via `canUseTool` callback or in `allowedTools`. Read-only tools execute without confirmation.
- **`plan`**: Blocks all write tools, instructing AI to present a plan first.
- **`auto-edit`**: Auto-approve edit tools (edit, write_file) while other tools require confirmation.
- **`yolo`**: All tools execute automatically without confirmation.
### Permission Priority Chain
1. `excludeTools` - Blocks tools completely
2. `permissionMode: 'plan'` - Blocks non-read-only tools
3. `permissionMode: 'yolo'` - Auto-approves all tools
4. `allowedTools` - Auto-approves matching tools
5. `canUseTool` callback - Custom approval logic
6. Default behavior - Auto-deny in SDK mode
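As an illustrative sketch of how these layers combine (the prompt and tool patterns here are examples, using the pattern syntax documented above):

```typescript
import { query } from '@qwen-code/sdk';

const result = query({
  prompt: 'Tidy up the scratch directory',
  options: {
    excludeTools: ['ShellTool(rm )'], // 1. blocked outright, highest priority
    allowedTools: ['read_file'], // 4. auto-approved, bypasses canUseTool
    canUseTool: async (toolName, input) => {
      // 5. remaining tools that need confirmation land here
      if (toolName === 'write_file') {
        return { behavior: 'allow', updatedInput: input };
      }
      return { behavior: 'deny', message: `${toolName} denied by policy` };
    },
  },
});
```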
## Examples
### Multi-turn Conversation
```typescript
import { query, type SDKUserMessage } from '@qwen-code/sdk';
async function* generateMessages(): AsyncIterable<SDKUserMessage> {
yield {
type: 'user',
session_id: 'my-session',
message: { role: 'user', content: 'Create a hello.txt file' },
parent_tool_use_id: null,
};
// Wait for some condition or user input
yield {
type: 'user',
session_id: 'my-session',
message: { role: 'user', content: 'Now read the file back' },
parent_tool_use_id: null,
};
}
const result = query({
prompt: generateMessages(),
options: {
permissionMode: 'auto-edit',
},
});
for await (const message of result) {
console.log(message);
}
```
### Custom Permission Handler
```typescript
import { query, type CanUseTool } from '@qwen-code/sdk';
const canUseTool: CanUseTool = async (toolName, input, { signal }) => {
// Allow all read operations
if (toolName.startsWith('read_')) {
return { behavior: 'allow', updatedInput: input };
}
// Prompt user for write operations (in a real app)
const userApproved = await promptUser(`Allow ${toolName}?`);
if (userApproved) {
return { behavior: 'allow', updatedInput: input };
}
return { behavior: 'deny', message: 'User denied the operation' };
};
const result = query({
prompt: 'Create a new file',
options: {
canUseTool,
},
});
```
### With External MCP Servers
```typescript
import { query } from '@qwen-code/sdk';
const result = query({
prompt: 'Use the custom tool from my MCP server',
options: {
mcpServers: {
'my-server': {
command: 'node',
args: ['path/to/mcp-server.js'],
env: { PORT: '3000' },
},
},
},
});
```
### With SDK-Embedded MCP Servers
The SDK provides `tool` and `createSdkMcpServer` to create MCP servers that run in the same process as your SDK application. This is useful when you want to expose custom tools to the AI without running a separate server process.
#### `tool(name, description, inputSchema, handler)`
Creates a tool definition with Zod schema type inference.
| Parameter | Type | Description |
| ------------- | ---------------------------------- | ------------------------------------------------------------------------ |
| `name` | `string` | Tool name (1-64 chars, starts with letter, alphanumeric and underscores) |
| `description` | `string` | Human-readable description of what the tool does |
| `inputSchema` | `ZodRawShape` | Zod schema object defining the tool's input parameters |
| `handler` | `(args, extra) => Promise<Result>` | Async function that executes the tool and returns MCP content blocks |
The handler must return a `CallToolResult` object with the following structure:
```typescript
{
content: Array<
| { type: 'text'; text: string }
| { type: 'image'; data: string; mimeType: string }
| { type: 'resource'; uri: string; mimeType?: string; text?: string }
>;
isError?: boolean;
}
```
#### `createSdkMcpServer(options)`
Creates an SDK-embedded MCP server instance.
| Option | Type | Default | Description |
| --------- | ------------------------ | --------- | ------------------------------------ |
| `name` | `string` | Required | Unique name for the MCP server |
| `version` | `string` | `'1.0.0'` | Server version |
| `tools` | `SdkMcpToolDefinition[]` | - | Array of tools created with `tool()` |
Returns a `McpSdkServerConfigWithInstance` object that can be passed directly to the `mcpServers` option.
#### Example
```typescript
import { z } from 'zod';
import { query, tool, createSdkMcpServer } from '@qwen-code/sdk';
// Define a tool with Zod schema
const calculatorTool = tool(
'calculate_sum',
'Add two numbers',
{ a: z.number(), b: z.number() },
async (args) => ({
content: [{ type: 'text', text: String(args.a + args.b) }],
}),
);
// Create the MCP server
const server = createSdkMcpServer({
name: 'calculator',
tools: [calculatorTool],
});
// Use the server in a query
const result = query({
prompt: 'What is 42 + 17?',
options: {
permissionMode: 'yolo',
mcpServers: {
calculator: server,
},
},
});
for await (const message of result) {
console.log(message);
}
```
### Abort a Query
```typescript
import { query, isAbortError } from '@qwen-code/sdk';
const abortController = new AbortController();
const result = query({
prompt: 'Long running task...',
options: {
abortController,
},
});
// Abort after 5 seconds
setTimeout(() => abortController.abort(), 5000);
try {
for await (const message of result) {
console.log(message);
}
} catch (error) {
if (isAbortError(error)) {
console.log('Query was aborted');
} else {
throw error;
}
}
```
## Error Handling
The SDK provides an `AbortError` class for handling aborted queries:
```typescript
import { AbortError, isAbortError } from '@qwen-code/sdk';
try {
// ... query operations
} catch (error) {
if (isAbortError(error)) {
// Handle abort
} else {
// Handle other errors
}
}
```

View File

@@ -0,0 +1,13 @@
export default {
introduction: 'Introduction',
'file-system': 'File System',
'multi-file': 'Multi-File Read',
shell: 'Shell',
'todo-write': 'Todo Write',
task: 'Task',
'exit-plan-mode': 'Exit Plan Mode',
'web-fetch': 'Web Fetch',
'web-search': 'Web Search',
memory: 'Memory',
'mcp-server': 'MCP Servers',
};

View File

@@ -0,0 +1,149 @@
# Exit Plan Mode Tool (`exit_plan_mode`)
This document describes the `exit_plan_mode` tool for Qwen Code.
## Description
Use `exit_plan_mode` when you are in plan mode and have finished presenting your implementation plan. This tool prompts the user to approve or reject the plan and transitions from planning mode to implementation mode.
The tool is specifically designed for tasks that require planning implementation steps before writing code. It should NOT be used for research or information-gathering tasks.
### Arguments
`exit_plan_mode` takes one argument:
- `plan` (string, required): The implementation plan you want to present to the user for approval. This should be a concise, markdown-formatted plan describing the implementation steps.
## How to use `exit_plan_mode` with Qwen Code
The Exit Plan Mode tool is part of Qwen Code's planning workflow. When you're in plan mode (typically after exploring a codebase and designing an implementation approach), you use this tool to:
1. Present your implementation plan to the user
2. Request approval to proceed with implementation
3. Transition from plan mode to implementation mode based on user response
The tool will prompt the user with your plan and provide options to:
- **Proceed Once**: Approve the plan for this session only
- **Proceed Always**: Approve the plan and enable auto-approval for future edit operations
- **Cancel**: Reject the plan and remain in planning mode
Usage:
```
exit_plan_mode(plan="Your detailed implementation plan here...")
```
## When to Use This Tool
Use `exit_plan_mode` when:
1. **Implementation tasks**: You are planning the implementation steps for a coding task
2. **Plan completion**: You have finished exploring and designing your implementation approach
3. **User approval needed**: You need user confirmation before proceeding with code changes
4. **Code writing tasks**: The task involves writing, modifying, or refactoring code
### Examples of appropriate usage:
- "Help me implement user authentication" → Use after planning the auth system implementation
- "Add a new API endpoint for user management" → Use after designing the endpoint structure
- "Refactor the database layer to use TypeORM" → Use after planning the refactoring approach
## When NOT to Use This Tool
Do NOT use `exit_plan_mode` for:
1. **Research tasks**: Tasks focused on understanding or exploring existing code
2. **Information gathering**: When you're searching, reading, or analyzing without implementing
3. **Documentation tasks**: When creating documentation without code changes
4. **Analysis tasks**: When providing explanations or reviews without implementation
### Examples of inappropriate usage:
- "Search for and understand the implementation of vim mode in the codebase" → Don't use (research task)
- "Explain how the authentication system works" → Don't use (analysis task)
- "Find all TODO comments in the project" → Don't use (information gathering)
## `exit_plan_mode` examples
### Implementation planning example
```
exit_plan_mode(plan="## Implementation Plan for User Dashboard
### 1. Create Dashboard Components
- Create `UserDashboard.tsx` with main dashboard layout
- Add `DashboardStats.tsx` for displaying user statistics
- Implement `ActivityFeed.tsx` for recent user activities
### 2. Add API Integration
- Create `dashboardApi.ts` for dashboard data fetching
- Add endpoints for user stats and activity data
- Implement error handling and loading states
### 3. Update Routing
- Add dashboard route in `App.tsx`
- Update navigation component with dashboard link
- Add route protection for authenticated users
### 4. Testing and Documentation
- Write unit tests for dashboard components
- Add integration tests for API calls
- Update user documentation with dashboard features
This implementation will take approximately 4-6 hours and requires changes to 8-10 files.")
```
### Refactoring plan example
```
exit_plan_mode(plan="## Database Layer Refactoring Plan
### Phase 1: Setup and Configuration
- Install and configure TypeORM dependencies
- Create database connection configuration
- Set up entity definitions for existing models
### Phase 2: Migration
- Create migration scripts for existing data
- Implement new repository pattern using TypeORM
- Replace raw SQL queries with TypeORM query builder
### Phase 3: Testing and Validation
- Update all database tests to use TypeORM
- Validate data integrity after migration
- Performance testing to ensure no regressions
This refactoring will modernize our database layer while maintaining backward compatibility.")
```
## User Response Handling
After calling `exit_plan_mode`, the user can respond in several ways:
- **Proceed Once**: The plan is approved for immediate implementation with default confirmation settings
- **Proceed Always**: The plan is approved and auto-approval is enabled for subsequent edit operations
- **Cancel**: The plan is rejected, and the system remains in plan mode for further planning
The tool automatically adjusts the approval mode based on the user's choice, streamlining the implementation process according to user preferences.
## Important Notes
- **Plan mode only**: This tool should only be used when you are currently in plan mode
- **Implementation focus**: Only use for tasks that involve writing or modifying code
- **Concise plans**: Keep plans focused and concise - aim for clarity over exhaustive detail
- **Markdown support**: Plans support markdown formatting for better readability
- **Single use**: The tool should be used once per planning session when ready to proceed
- **User control**: The final decision to proceed always rests with the user
## Integration with Planning Workflow
The Exit Plan Mode tool is part of a larger planning workflow:
1. **Enter Plan Mode**: User requests or system determines planning is needed
2. **Exploration Phase**: Analyze codebase, understand requirements, explore options
3. **Plan Design**: Create implementation strategy based on exploration
4. **Plan Presentation**: Use `exit_plan_mode` to present plan to user
5. **Implementation Phase**: Upon approval, proceed with planned implementation
This workflow ensures thoughtful implementation approaches and gives users control over significant code changes.

View File

@@ -0,0 +1,167 @@
# Qwen Code file system tools
Qwen Code provides a comprehensive suite of tools for interacting with the local file system. These tools allow the model to read from, write to, list, search, and modify files and directories, all under your control and typically with confirmation for sensitive operations.
**Note:** All file system tools operate within a `rootDirectory` (usually the current working directory where you launched the CLI) for security. Paths that you provide to these tools are generally expected to be absolute or are resolved relative to this root directory.
## 1. `list_directory` (ListFiles)
`list_directory` lists the names of files and subdirectories directly within a specified directory path. It can optionally ignore entries matching provided glob patterns.
- **Tool name:** `list_directory`
- **Display name:** ListFiles
- **File:** `ls.ts`
- **Parameters:**
- `path` (string, required): The absolute path to the directory to list.
- `ignore` (array of strings, optional): A list of glob patterns to exclude from the listing (e.g., `["*.log", ".git"]`).
- `respect_git_ignore` (boolean, optional): Whether to respect `.gitignore` patterns when listing files. Defaults to `true`.
- **Behavior:**
- Returns a list of file and directory names.
- Indicates whether each entry is a directory.
- Sorts entries with directories first, then alphabetically.
- **Output (`llmContent`):** A string like: `Directory listing for /path/to/your/folder:\n[DIR] subfolder1\nfile1.txt\nfile2.png`
- **Confirmation:** No.
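For example (illustrative path and patterns):

```
list_directory(path="/path/to/project", ignore=["node_modules", "*.log"])
```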
## 2. `read_file` (ReadFile)
`read_file` reads and returns the content of a specified file. This tool handles text, images (PNG, JPG, GIF, WEBP, SVG, BMP), and PDF files. For text files, it can read specific line ranges. Other binary file types are generally skipped.
- **Tool name:** `read_file`
- **Display name:** ReadFile
- **File:** `read-file.ts`
- **Parameters:**
- `path` (string, required): The absolute path to the file to read.
- `offset` (number, optional): For text files, the 0-based line number to start reading from. Requires `limit` to be set.
- `limit` (number, optional): For text files, the maximum number of lines to read. If omitted, reads a default maximum (e.g., 2000 lines) or the entire file if feasible.
- **Behavior:**
- For text files: Returns the content. If `offset` and `limit` are used, returns only that slice of lines. Indicates if content was truncated due to line limits or line length limits.
- For image and PDF files: Returns the file content as a base64-encoded data structure suitable for model consumption.
- For other binary files: Attempts to identify and skip them, returning a message indicating it's a generic binary file.
- **Output (`llmContent`):**
- For text files: The file content, potentially prefixed with a truncation message (e.g., `[File content truncated: showing lines 1-100 of 500 total lines...]\nActual file content...`).
- For image/PDF files: An object containing `inlineData` with `mimeType` and base64 `data` (e.g., `{ inlineData: { mimeType: 'image/png', data: 'base64encodedstring' } }`).
- For other binary files: A message like `Cannot display content of binary file: /path/to/data.bin`.
- **Confirmation:** No.
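For example, to read the first 100 lines of a (hypothetical) source file:

```
read_file(path="/path/to/project/src/index.ts", offset=0, limit=100)
```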
## 3. `write_file` (WriteFile)
`write_file` writes content to a specified file. If the file exists, it will be overwritten. If the file doesn't exist, it (and any necessary parent directories) will be created.
- **Tool name:** `write_file`
- **Display name:** WriteFile
- **File:** `write-file.ts`
- **Parameters:**
- `file_path` (string, required): The absolute path to the file to write to.
- `content` (string, required): The content to write into the file.
- **Behavior:**
- Writes the provided `content` to the `file_path`.
- Creates parent directories if they don't exist.
- **Output (`llmContent`):** A success message, e.g., `Successfully overwrote file: /path/to/your/file.txt` or `Successfully created and wrote to new file: /path/to/new/file.txt`.
- **Confirmation:** Yes. Shows a diff of changes and asks for user approval before writing.
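For example (illustrative path and content):

```
write_file(file_path="/path/to/project/notes.txt", content="Hello, world!")
```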
## 4. `glob` (Glob)
`glob` finds files matching specific glob patterns (e.g., `src/**/*.ts`, `*.md`), returning absolute paths sorted by modification time (newest first).
- **Tool name:** `glob`
- **Display name:** Glob
- **File:** `glob.ts`
- **Parameters:**
- `pattern` (string, required): The glob pattern to match against (e.g., `"*.py"`, `"src/**/*.js"`).
- `path` (string, optional): The directory to search in. If not specified, the current working directory will be used.
- **Behavior:**
- Searches for files matching the glob pattern within the specified directory.
- Returns a list of absolute paths, sorted with the most recently modified files first.
- Respects .gitignore and .qwenignore patterns by default.
- Limits results to 100 files to prevent context overflow.
- **Output (`llmContent`):** A message like: `Found 5 file(s) matching "*.ts" within /path/to/search/dir, sorted by modification time (newest first):\n---\n/path/to/file1.ts\n/path/to/subdir/file2.ts\n---\n[95 files truncated] ...`
- **Confirmation:** No.
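For example (illustrative pattern and path):

```
glob(pattern="src/**/*.ts", path="/path/to/project")
```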
## 5. `grep_search` (Grep)
`grep_search` searches for a regular expression pattern within the content of files in a specified directory. Can filter files by a glob pattern. Returns the lines containing matches, along with their file paths and line numbers.
- **Tool name:** `grep_search`
- **Display name:** Grep
- **File:** `grep.ts` (with `ripGrep.ts` as fallback)
- **Parameters:**
- `pattern` (string, required): The regular expression pattern to search for in file contents (e.g., `"function\\s+myFunction"`, `"log.*Error"`).
- `path` (string, optional): File or directory to search in. Defaults to current working directory.
- `glob` (string, optional): Glob pattern to filter files (e.g. `"*.js"`, `"src/**/*.{ts,tsx}"`).
- `limit` (number, optional): Limit output to first N matching lines. Optional - shows all matches if not specified.
- **Behavior:**
- Uses ripgrep for fast search when available; otherwise falls back to a JavaScript-based search implementation.
- Returns matching lines with file paths and line numbers.
- Case-insensitive by default.
- Respects .gitignore and .qwenignore patterns.
- Limits output to prevent context overflow.
- **Output (`llmContent`):** A formatted string of matches, e.g.:
```
Found 3 matches for pattern "myFunction" in path "." (filter: "*.ts"):
---
src/utils.ts:15:export function myFunction() {
src/utils.ts:22: myFunction.call();
src/index.ts:5:import { myFunction } from './utils';
---
[0 lines truncated] ...
```
- **Confirmation:** No.
### `grep_search` examples
Search for a pattern with default result limiting:
```
grep_search(pattern="function\\s+myFunction", path="src")
```
Search for a pattern with custom result limiting:
```
grep_search(pattern="function", path="src", limit=50)
```
Search for a pattern with file filtering and custom result limiting:
```
grep_search(pattern="function", glob="*.js", limit=10)
```
## 6. `edit` (Edit)
`edit` replaces text within a file. By default it requires `old_string` to match a single unique location; set `replace_all` to `true` when you intentionally want to change every occurrence. This tool is designed for precise, targeted changes and requires significant context around the `old_string` to ensure it modifies the correct location.
- **Tool name:** `edit`
- **Display name:** Edit
- **File:** `edit.ts`
- **Parameters:**
- `file_path` (string, required): The absolute path to the file to modify.
- `old_string` (string, required): The exact literal text to replace.
**CRITICAL:** This string must uniquely identify the single instance to change. It should include sufficient context around the target text, matching whitespace and indentation precisely. If `old_string` is empty, the tool attempts to create a new file at `file_path` with `new_string` as content.
- `new_string` (string, required): The exact literal text to replace `old_string` with.
- `replace_all` (boolean, optional): Replace all occurrences of `old_string`. Defaults to `false`.
- **Behavior:**
- If `old_string` is empty and `file_path` does not exist, creates a new file with `new_string` as content.
- If `old_string` is provided, it reads the `file_path` and attempts to find exactly one occurrence unless `replace_all` is true.
- If the match is unique (or `replace_all` is true), it replaces the text with `new_string`.
- **Enhanced Reliability (Multi-Stage Edit Correction):** To significantly improve the success rate of edits, especially when the model-provided `old_string` might not be perfectly precise, the tool incorporates a multi-stage edit correction mechanism.
- If the initial `old_string` isn't found or matches multiple locations, the tool can leverage the Qwen model to iteratively refine `old_string` (and potentially `new_string`).
- This self-correction process attempts to identify the unique segment the model intended to modify, making the `edit` operation more robust even with slightly imperfect initial context.
- **Failure conditions:** Despite the correction mechanism, the tool will fail if:
- `file_path` is not absolute or is outside the root directory.
- `old_string` is not empty, but the `file_path` does not exist.
- `old_string` is empty, but the `file_path` already exists.
- `old_string` is not found in the file after attempts to correct it.
- `old_string` is found multiple times, `replace_all` is false, and the self-correction mechanism cannot resolve it to a single, unambiguous match.
- **Output (`llmContent`):**
- On success: `Successfully modified file: /path/to/file.txt (1 replacements).` or `Created new file: /path/to/new_file.txt with provided content.`
- On failure: An error message explaining the reason (e.g., `Failed to edit, 0 occurrences found...`, `Failed to edit because the text matches multiple locations...`).
- **Confirmation:** Yes. Shows a diff of the proposed changes and asks for user approval before writing to the file.
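### `edit` examples

The following invocations are illustrative, using the same pseudo-call convention as the other tool examples in this document; paths and code snippets are placeholders.

Replace a single, uniquely identified occurrence:

```
edit(file_path="/path/to/project/src/config.ts", old_string="const DEBUG = false;", new_string="const DEBUG = true;")
```

Replace every occurrence of a string:

```
edit(file_path="/path/to/project/README.md", old_string="old-project-name", new_string="new-project-name", replace_all=true)
```

Create a new file by passing an empty `old_string`:

```
edit(file_path="/path/to/project/NOTES.md", old_string="", new_string="# Notes")
```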
These file system tools provide a foundation for Qwen Code to understand and interact with your local project context.

View File

@@ -0,0 +1,62 @@
# Qwen Code tools
Qwen Code includes built-in tools that the model uses to interact with your local environment, access information, and perform actions. These tools enhance the CLI's capabilities, enabling it to go beyond text generation and assist with a wide range of tasks.
## Overview of Qwen Code tools
In the context of Qwen Code, tools are specific functions or modules that the model can request to be executed. For example, if you ask the model to "Summarize the contents of `my_document.txt`," it will likely identify the need to read that file and will request the execution of the `read_file` tool.
The core component (`packages/core`) manages these tools, presents their definitions (schemas) to the model, executes them when requested, and returns the results to the model for further processing into a user-facing response.
These tools provide the following capabilities:
- **Access local information:** Tools allow the model to access your local file system, read file contents, list directories, etc.
- **Execute commands:** With tools like `run_shell_command`, the model can run shell commands (with appropriate safety measures and user confirmation).
- **Interact with the web:** Tools can fetch content from URLs.
- **Take actions:** Tools can modify files, write new files, or perform other actions on your system (again, typically with safeguards).
- **Ground responses:** By using tools to fetch real-time or specific local data, responses can be more accurate, relevant, and grounded in your actual context.
## How to use Qwen Code tools
To use Qwen Code tools, provide a prompt to the CLI. The process works as follows:
1. You provide a prompt to the CLI.
2. The CLI sends the prompt to the core.
3. The core, along with your prompt and conversation history, sends a list of available tools and their descriptions/schemas to the configured model API.
4. The model analyzes your request. If it determines that a tool is needed, its response will include a request to execute a specific tool with certain parameters.
5. The core receives this tool request, validates it, and (often after user confirmation for sensitive operations) executes the tool.
6. The output from the tool is sent back to the model.
7. The model uses the tool's output to formulate its final answer, which is then sent back through the core to the CLI and displayed to you.
You will typically see messages in the CLI indicating when a tool is being called and whether it succeeded or failed.
## Security and confirmation
Many tools, especially those that can modify your file system or execute commands (`write_file`, `edit`, `run_shell_command`), are designed with safety in mind. Qwen Code will typically:
- **Require confirmation:** Prompt you before executing potentially sensitive operations, showing you what action is about to be taken.
- **Utilize sandboxing:** All tools are subject to restrictions enforced by sandboxing (see [Sandboxing in Qwen Code](../sandbox.md)). This means that when operating in a sandbox, any tools (including MCP servers) you wish to use must be available _inside_ the sandbox environment. For example, to run an MCP server through `npx`, the `npx` executable must be installed within the sandbox's Docker image or be available in the `sandbox-exec` environment.
It's important to always review confirmation prompts carefully before allowing a tool to proceed.
## Learn more about Qwen Code's tools
Qwen Code's built-in tools can be broadly categorized as follows:
- **[File System Tools](./file-system.md):** For interacting with files and directories (reading, writing, listing, searching, etc.).
- **[Shell Tool](./shell.md) (`run_shell_command`):** For executing shell commands.
- **[Web Fetch Tool](./web-fetch.md) (`web_fetch`):** For retrieving content from URLs.
- **[Web Search Tool](./web-search.md) (`web_search`):** For searching the web.
- **[Multi-File Read Tool](./multi-file.md) (`read_many_files`):** A specialized tool for reading content from multiple files or directories, often used by the `@` command.
- **[Memory Tool](./memory.md) (`save_memory`):** For saving and recalling information across sessions.
- **[Todo Write Tool](./todo-write.md) (`todo_write`):** For creating and managing structured task lists during coding sessions.
- **[Task Tool](./task.md) (`task`):** For delegating complex tasks to specialized subagents.
- **[Exit Plan Mode Tool](./exit-plan-mode.md) (`exit_plan_mode`):** For exiting plan mode and proceeding with implementation.
In addition to these built-in tools, Qwen Code supports:
- **[MCP servers](./mcp-server.md)**: MCP servers act as a bridge between the model and your local environment or other services like APIs.
  - **[MCP Quick Start Guide](../mcp-quick-start.md)**: Get started with MCP in 5 minutes with practical examples.
  - **[MCP Example Configurations](../mcp-example-configs.md)**: Ready-to-use configurations for common scenarios.
  - **[MCP Testing & Validation](../mcp-testing-validation.md)**: Test and validate your MCP server setups.
- **[Sandboxing](../sandbox.md)**: Sandboxing isolates the model and its changes from your environment to reduce potential risk.

View File

@@ -0,0 +1,871 @@
# MCP servers with Qwen Code
This document provides a guide to configuring and using Model Context Protocol (MCP) servers with Qwen Code.
## What is an MCP server?
An MCP server is an application that exposes tools and resources to the CLI through the Model Context Protocol, allowing it to interact with external systems and data sources. MCP servers act as a bridge between the model and your local environment or other services like APIs.
An MCP server enables the CLI to:
- **Discover tools:** List available tools, their descriptions, and parameters through standardized schema definitions.
- **Execute tools:** Call specific tools with defined arguments and receive structured responses.
- **Access resources:** Read data from specific resources (though the CLI primarily focuses on tool execution).
With an MCP server, you can extend the CLI's capabilities to perform actions beyond its built-in features, such as interacting with databases, APIs, custom scripts, or specialized workflows.
## Core Integration Architecture
Qwen Code integrates with MCP servers through a sophisticated discovery and execution system built into the core package (`packages/core/src/tools/`):
### Discovery Layer (`mcp-client.ts`)
The discovery process is orchestrated by `discoverMcpTools()`, which:
1. **Iterates through configured servers** from your `settings.json` `mcpServers` configuration
2. **Establishes connections** using appropriate transport mechanisms (Stdio, SSE, or Streamable HTTP)
3. **Fetches tool definitions** from each server using the MCP protocol
4. **Sanitizes and validates** tool schemas for compatibility with the Qwen API
5. **Registers tools** in the global tool registry with conflict resolution
### Execution Layer (`mcp-tool.ts`)
Each discovered MCP tool is wrapped in a `DiscoveredMCPTool` instance that:
- **Handles confirmation logic** based on server trust settings and user preferences
- **Manages tool execution** by calling the MCP server with proper parameters
- **Processes responses** for both the LLM context and user display
- **Maintains connection state** and handles timeouts
### Transport Mechanisms
The CLI supports three MCP transport types:
- **Stdio Transport:** Spawns a subprocess and communicates via stdin/stdout
- **SSE Transport:** Connects to Server-Sent Events endpoints
- **Streamable HTTP Transport:** Uses HTTP streaming for communication
## How to set up your MCP server
Qwen Code uses the `mcpServers` configuration in your `settings.json` file to locate and connect to MCP servers. This configuration supports multiple servers with different transport mechanisms.
### Configure the MCP server in settings.json
You can configure MCP servers in your `settings.json` file in two main ways: through the top-level `mcpServers` object for specific server definitions, and through the `mcp` object for global settings that control server discovery and execution.
#### Global MCP Settings (`mcp`)
The `mcp` object in your `settings.json` allows you to define global rules for all MCP servers.
- **`mcp.serverCommand`** (string): A global command to start an MCP server.
- **`mcp.allowed`** (array of strings): A list of MCP server names to allow. If this is set, only servers from this list (matching the keys in the `mcpServers` object) will be connected to.
- **`mcp.excluded`** (array of strings): A list of MCP server names to exclude. Servers in this list will not be connected to.
**Example:**
```json
{
"mcp": {
"allowed": ["my-trusted-server"],
"excluded": ["experimental-server"]
}
}
```
#### Server-Specific Configuration (`mcpServers`)
The `mcpServers` object is where you define each individual MCP server you want the CLI to connect to.
### Configuration Structure
Add an `mcpServers` object to your `settings.json` file, alongside any other top-level configuration:
```json
{
"mcpServers": {
"serverName": {
"command": "path/to/server",
"args": ["--arg1", "value1"],
"env": {
"API_KEY": "$MY_API_TOKEN"
},
"cwd": "./server-directory",
"timeout": 30000,
"trust": false
}
}
}
```
### Configuration Properties
Each server configuration supports the following properties:
#### Required (one of the following)
- **`command`** (string): Path to the executable for Stdio transport
- **`url`** (string): SSE endpoint URL (e.g., `"http://localhost:8080/sse"`)
- **`httpUrl`** (string): HTTP streaming endpoint URL
#### Optional
- **`args`** (string[]): Command-line arguments for Stdio transport
- **`headers`** (object): Custom HTTP headers when using `url` or `httpUrl`
- **`env`** (object): Environment variables for the server process. Values can reference environment variables using `$VAR_NAME` or `${VAR_NAME}` syntax
- **`cwd`** (string): Working directory for Stdio transport
- **`timeout`** (number): Request timeout in milliseconds (default: 600,000ms = 10 minutes)
- **`trust`** (boolean): When `true`, bypasses all tool call confirmations for this server (default: `false`)
- **`includeTools`** (string[]): List of tool names to include from this MCP server. When specified, only the tools listed here will be available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default.
- **`excludeTools`** (string[]): List of tool names to exclude from this MCP server. Tools listed here will not be available to the model, even if they are exposed by the server. **Note:** `excludeTools` takes precedence over `includeTools` - if a tool is in both lists, it will be excluded.
- **`targetAudience`** (string): The OAuth Client ID allowlisted on the IAP-protected application you are trying to access. Used with `authProviderType: 'service_account_impersonation'`.
- **`targetServiceAccount`** (string): The email address of the Google Cloud Service Account to impersonate. Used with `authProviderType: 'service_account_impersonation'`.
### OAuth Support for Remote MCP Servers
Qwen Code supports OAuth 2.0 authentication for remote MCP servers using SSE or HTTP transports. This enables secure access to MCP servers that require authentication.
#### Automatic OAuth Discovery
For servers that support OAuth discovery, you can omit the OAuth configuration and let the CLI discover it automatically:
```json
{
"mcpServers": {
"discoveredServer": {
"url": "https://api.example.com/sse"
}
}
}
```
The CLI will automatically:
- Detect when a server requires OAuth authentication (401 responses)
- Discover OAuth endpoints from server metadata
- Perform dynamic client registration if supported
- Handle the OAuth flow and token management
#### Authentication Flow
When connecting to an OAuth-enabled server:
1. **Initial connection attempt** fails with 401 Unauthorized
2. **OAuth discovery** finds authorization and token endpoints
3. **Browser opens** for user authentication (requires local browser access)
4. **Authorization code** is exchanged for access tokens
5. **Tokens are stored** securely for future use
6. **Connection retry** succeeds with valid tokens
#### Browser Redirect Requirements
**Important:** OAuth authentication requires that your local machine can:
- Open a web browser for authentication
- Receive redirects on `http://localhost:7777/oauth/callback`
This feature will not work in:
- Headless environments without browser access
- Remote SSH sessions without X11 forwarding
- Containerized environments without browser support
#### Managing OAuth Authentication
Use the `/mcp auth` command to manage OAuth authentication:
```bash
# List servers requiring authentication
/mcp auth
# Authenticate with a specific server
/mcp auth serverName
# Re-authenticate if tokens expire
/mcp auth serverName
```
#### OAuth Configuration Properties
- **`enabled`** (boolean): Enable OAuth for this server
- **`clientId`** (string): OAuth client identifier (optional with dynamic registration)
- **`clientSecret`** (string): OAuth client secret (optional for public clients)
- **`authorizationUrl`** (string): OAuth authorization endpoint (auto-discovered if omitted)
- **`tokenUrl`** (string): OAuth token endpoint (auto-discovered if omitted)
- **`scopes`** (string[]): Required OAuth scopes
- **`redirectUri`** (string): Custom redirect URI (defaults to `http://localhost:7777/oauth/callback`)
- **`tokenParamName`** (string): Query parameter name for tokens in SSE URLs
- **`audiences`** (string[]): Audiences the token is valid for
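For servers that do not support automatic discovery, these properties can be combined under an `oauth` key. The following is a sketch with placeholder endpoints and identifiers:

```json
{
  "mcpServers": {
    "manualOAuthServer": {
      "url": "https://api.example.com/sse",
      "oauth": {
        "enabled": true,
        "clientId": "your-client-id",
        "authorizationUrl": "https://auth.example.com/authorize",
        "tokenUrl": "https://auth.example.com/token",
        "scopes": ["read", "write"]
      }
    }
  }
}
```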
#### Token Management
OAuth tokens are automatically:
- **Stored securely** in `~/.qwen/mcp-oauth-tokens.json`
- **Refreshed** when expired (if refresh tokens are available)
- **Validated** before each connection attempt
- **Cleaned up** when invalid or expired
#### Authentication Provider Type
You can specify the authentication provider type using the `authProviderType` property:
- **`authProviderType`** (string): Specifies the authentication provider. Can be one of the following:
- **`dynamic_discovery`** (default): The CLI will automatically discover the OAuth configuration from the server.
- **`google_credentials`**: The CLI will use the Google Application Default Credentials (ADC) to authenticate with the server. When using this provider, you must specify the required scopes.
- **`service_account_impersonation`**: The CLI will impersonate a Google Cloud Service Account to authenticate with the server. This is useful for accessing IAP-protected services (this was specifically designed for Cloud Run services).
#### Google Credentials
```json
{
"mcpServers": {
"googleCloudServer": {
"httpUrl": "https://my-gcp-service.run.app/mcp",
"authProviderType": "google_credentials",
"oauth": {
"scopes": ["https://www.googleapis.com/auth/userinfo.email"]
}
}
}
}
```
#### Service Account Impersonation
To authenticate with a server using Service Account Impersonation, you must set the `authProviderType` to `service_account_impersonation` and provide the following properties:
- **`targetAudience`** (string): The OAuth Client ID allowlisted on the IAP-protected application you are trying to access.
- **`targetServiceAccount`** (string): The email address of the Google Cloud Service Account to impersonate.
The CLI will use your local Application Default Credentials (ADC) to generate an OIDC ID token for the specified service account and audience. This token will then be used to authenticate with the MCP server.
#### Setup Instructions
1. **[Create](https://cloud.google.com/iap/docs/oauth-client-creation) or use an existing OAuth 2.0 client ID.** To use an existing OAuth 2.0 client ID, follow the steps in [How to share OAuth Clients](https://cloud.google.com/iap/docs/sharing-oauth-clients).
2. **Add the OAuth ID to the allowlist for [programmatic access](https://cloud.google.com/iap/docs/sharing-oauth-clients#programmatic_access) for the application.** Since Cloud Run is not yet a supported resource type in gcloud iap, you must allowlist the Client ID on the project.
3. **Create a service account.** [Documentation](https://cloud.google.com/iam/docs/service-accounts-create#creating), [Cloud Console Link](https://console.cloud.google.com/iam-admin/serviceaccounts)
4. **Add both the service account and users to the IAP Policy** in the "Security" tab of the Cloud Run service itself or via gcloud.
5. **Grant all users and groups** who will access the MCP Server the necessary permissions to [impersonate the service account](https://cloud.google.com/docs/authentication/use-service-account-impersonation) (i.e., `roles/iam.serviceAccountTokenCreator`).
6. **[Enable](https://console.cloud.google.com/apis/library/iamcredentials.googleapis.com) the IAM Credentials API** for your project.
### Example Configurations
#### Python MCP Server (Stdio)
```json
{
"mcpServers": {
"pythonTools": {
"command": "python",
"args": ["-m", "my_mcp_server", "--port", "8080"],
"cwd": "./mcp-servers/python",
"env": {
"DATABASE_URL": "$DB_CONNECTION_STRING",
"API_KEY": "${EXTERNAL_API_KEY}"
},
"timeout": 15000
}
}
}
```
#### Node.js MCP Server (Stdio)
```json
{
"mcpServers": {
"nodeServer": {
"command": "node",
"args": ["dist/server.js", "--verbose"],
"cwd": "./mcp-servers/node",
"trust": true
}
}
}
```
#### Docker-based MCP Server
```json
{
"mcpServers": {
"dockerizedServer": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"API_KEY",
"-v",
"${PWD}:/workspace",
"my-mcp-server:latest"
],
"env": {
"API_KEY": "$EXTERNAL_SERVICE_TOKEN"
}
}
}
}
```
#### HTTP-based MCP Server
```json
{
"mcpServers": {
"httpServer": {
"httpUrl": "http://localhost:3000/mcp",
"timeout": 5000
}
}
}
```
#### HTTP-based MCP Server with Custom Headers
```json
{
"mcpServers": {
"httpServerWithAuth": {
"httpUrl": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer your-api-token",
"X-Custom-Header": "custom-value",
"Content-Type": "application/json"
},
"timeout": 5000
}
}
}
```
#### MCP Server with Tool Filtering
```json
{
"mcpServers": {
"filteredServer": {
"command": "python",
"args": ["-m", "my_mcp_server"],
"includeTools": ["safe_tool", "file_reader", "data_processor"],
// "excludeTools": ["dangerous_tool", "file_deleter"],
"timeout": 30000
}
}
}
```
#### SSE MCP Server with Service Account Impersonation
```json
{
"mcpServers": {
"myIapProtectedServer": {
"url": "https://my-iap-service.run.app/sse",
"authProviderType": "service_account_impersonation",
"targetAudience": "YOUR_IAP_CLIENT_ID.apps.googleusercontent.com",
"targetServiceAccount": "your-sa@your-project.iam.gserviceaccount.com"
}
}
}
```
## Discovery Process Deep Dive
When Qwen Code starts, it performs MCP server discovery through the following detailed process:
### 1. Server Iteration and Connection
For each configured server in `mcpServers`:
1. **Status tracking begins:** Server status is set to `CONNECTING`
2. **Transport selection:** Based on configuration properties:
- `httpUrl``StreamableHTTPClientTransport`
- `url``SSEClientTransport`
- `command``StdioClientTransport`
3. **Connection establishment:** The MCP client attempts to connect with the configured timeout
4. **Error handling:** Connection failures are logged and the server status is set to `DISCONNECTED`
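As a rough sketch of the transport selection rule in step 2 (not the actual `mcp-client.ts` code; the minimal `ServerConfig` shape is an assumption), using the official MCP SDK transport classes:

```typescript
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Minimal assumed shape; the real configuration type has more fields.
interface ServerConfig {
  httpUrl?: string;
  url?: string;
  command?: string;
  args?: string[];
}

// `httpUrl` takes precedence over `url`, which takes precedence over `command`.
function selectTransport(cfg: ServerConfig) {
  if (cfg.httpUrl) return new StreamableHTTPClientTransport(new URL(cfg.httpUrl));
  if (cfg.url) return new SSEClientTransport(new URL(cfg.url));
  return new StdioClientTransport({ command: cfg.command!, args: cfg.args ?? [] });
}
```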
### 2. Tool Discovery
Upon successful connection:
1. **Tool listing:** The client calls the MCP server's tool listing endpoint
2. **Schema validation:** Each tool's function declaration is validated
3. **Tool filtering:** Tools are filtered based on `includeTools` and `excludeTools` configuration
4. **Name sanitization:** Tool names are cleaned to meet Qwen API requirements:
- Invalid characters (non-alphanumeric, underscore, dot, hyphen) are replaced with underscores
- Names longer than 63 characters are truncated with middle replacement (`___`)
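A minimal sketch of these two sanitization rules (the exact head/tail split is an assumption, not the actual implementation):

```typescript
// Replace invalid characters, then shorten long names by replacing the middle.
function sanitizeToolName(name: string): string {
  let sanitized = name.replace(/[^A-Za-z0-9_.-]/g, '_');
  if (sanitized.length > 63) {
    // 28 + '___' + 32 = 63 characters; the exact split point is illustrative.
    sanitized = sanitized.slice(0, 28) + '___' + sanitized.slice(-32);
  }
  return sanitized;
}
```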
### 3. Conflict Resolution
When multiple servers expose tools with the same name:
1. **First registration wins:** The first server to register a tool name gets the unprefixed name
2. **Automatic prefixing:** Subsequent servers get prefixed names: `serverName__toolName`
3. **Registry tracking:** The tool registry maintains mappings between server names and their tools
### 4. Schema Processing
Tool parameter schemas undergo sanitization for API compatibility:
- **`$schema` properties** are removed
- **`additionalProperties`** are stripped
- **`anyOf` with `default`** have their default values removed (Vertex AI compatibility)
- **Recursive processing** applies to nested schemas
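A sketch of these transformations (the actual logic in `packages/core` may differ in detail):

```typescript
// Recursively strip `$schema` and `additionalProperties`, and drop `default`
// wherever `anyOf` is present (for Vertex AI compatibility).
function sanitizeSchema(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeSchema);
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
      if (key === '$schema' || key === 'additionalProperties') continue;
      out[key] = sanitizeSchema(child);
    }
    if ('anyOf' in out) delete out['default'];
    return out;
  }
  return value;
}
```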
### 5. Connection Management
After discovery:
- **Persistent connections:** Servers that successfully register tools maintain their connections
- **Cleanup:** Servers that provide no usable tools have their connections closed
- **Status updates:** Final server statuses are set to `CONNECTED` or `DISCONNECTED`
## Tool Execution Flow
When the model decides to use an MCP tool, the following execution flow occurs:
### 1. Tool Invocation
The model generates a `FunctionCall` with:
- **Tool name:** The registered name (potentially prefixed)
- **Arguments:** JSON object matching the tool's parameter schema
### 2. Confirmation Process
Each `DiscoveredMCPTool` implements sophisticated confirmation logic:
#### Trust-based Bypass
```typescript
if (this.trust) {
return false; // No confirmation needed
}
```
#### Dynamic Allow-listing
The system maintains internal allow-lists for:
- **Server-level:** `serverName` → All tools from this server are trusted
- **Tool-level:** `serverName.toolName` → This specific tool is trusted
#### User Choice Handling
When confirmation is required, users can choose:
- **Proceed once:** Execute this time only
- **Always allow this tool:** Add to tool-level allow-list
- **Always allow this server:** Add to server-level allow-list
- **Cancel:** Abort execution
### 3. Execution
Upon confirmation (or trust bypass):
1. **Parameter preparation:** Arguments are validated against the tool's schema
2. **MCP call:** The underlying `CallableTool` invokes the server with:
```typescript
const functionCalls = [
{
name: this.serverToolName, // Original server tool name
args: params,
},
];
```
3. **Response processing:** Results are formatted for both LLM context and user display
### 4. Response Handling
The execution result contains:
- **`llmContent`:** Raw response parts for the language model's context
- **`returnDisplay`:** Formatted output for user display (often JSON in markdown code blocks)
## How to interact with your MCP server
### Using the `/mcp` Command
The `/mcp` command provides comprehensive information about your MCP server setup:
```bash
/mcp
```
This displays:
- **Server list:** All configured MCP servers
- **Connection status:** `CONNECTED`, `CONNECTING`, or `DISCONNECTED`
- **Server details:** Configuration summary (excluding sensitive data)
- **Available tools:** List of tools from each server with descriptions
- **Discovery state:** Overall discovery process status
### Example `/mcp` Output
```
MCP Servers Status:
📡 pythonTools (CONNECTED)
Command: python -m my_mcp_server --port 8080
Working Directory: ./mcp-servers/python
Timeout: 15000ms
Tools: calculate_sum, file_analyzer, data_processor
🔌 nodeServer (DISCONNECTED)
Command: node dist/server.js --verbose
Error: Connection refused
🐳 dockerizedServer (CONNECTED)
Command: docker run -i --rm -e API_KEY my-mcp-server:latest
Tools: docker__deploy, docker__status
Discovery State: COMPLETED
```
### Tool Usage
Once discovered, MCP tools are available to the Qwen model like built-in tools. The model will automatically:
1. **Select appropriate tools** based on your requests
2. **Present confirmation dialogs** (unless the server is trusted)
3. **Execute tools** with proper parameters
4. **Display results** in a user-friendly format
## Status Monitoring and Troubleshooting
### Connection States
The MCP integration tracks several states:
#### Server Status (`MCPServerStatus`)
- **`DISCONNECTED`:** Server is not connected or has errors
- **`CONNECTING`:** Connection attempt in progress
- **`CONNECTED`:** Server is connected and ready
#### Discovery State (`MCPDiscoveryState`)
- **`NOT_STARTED`:** Discovery hasn't begun
- **`IN_PROGRESS`:** Currently discovering servers
- **`COMPLETED`:** Discovery finished (with or without errors)
### Common Issues and Solutions
#### Server Won't Connect
**Symptoms:** Server shows `DISCONNECTED` status
**Troubleshooting:**
1. **Check configuration:** Verify `command`, `args`, and `cwd` are correct
2. **Test manually:** Run the server command directly to ensure it works
3. **Check dependencies:** Ensure all required packages are installed
4. **Review logs:** Look for error messages in the CLI output
5. **Verify permissions:** Ensure the CLI can execute the server command
#### No Tools Discovered
**Symptoms:** Server connects but no tools are available
**Troubleshooting:**
1. **Verify tool registration:** Ensure your server actually registers tools
2. **Check MCP protocol:** Confirm your server implements the MCP tool listing correctly
3. **Review server logs:** Check stderr output for server-side errors
4. **Test tool listing:** Manually test your server's tool discovery endpoint
#### Tools Not Executing
**Symptoms:** Tools are discovered but fail during execution
**Troubleshooting:**
1. **Parameter validation:** Ensure your tool accepts the expected parameters
2. **Schema compatibility:** Verify your input schemas are valid JSON Schema
3. **Error handling:** Check if your tool is throwing unhandled exceptions
4. **Timeout issues:** Consider increasing the `timeout` setting
#### Sandbox Compatibility
**Symptoms:** MCP servers fail when sandboxing is enabled
**Solutions:**
1. **Docker-based servers:** Use Docker containers that include all dependencies
2. **Path accessibility:** Ensure server executables are available in the sandbox
3. **Network access:** Configure sandbox to allow necessary network connections
4. **Environment variables:** Verify required environment variables are passed through
### Debugging Tips
1. **Enable debug mode:** Run the CLI with `--debug` for verbose output
2. **Check stderr:** MCP server stderr is captured and logged (INFO messages filtered)
3. **Test isolation:** Test your MCP server independently before integrating
4. **Incremental setup:** Start with simple tools before adding complex functionality
5. **Use `/mcp` frequently:** Monitor server status during development
## Important Notes
### Security Considerations
- **Trust settings:** The `trust` option bypasses all confirmation dialogs. Use cautiously and only for servers you completely control
- **Access tokens:** Be security-aware when configuring environment variables containing API keys or tokens
- **Sandbox compatibility:** When using sandboxing, ensure MCP servers are available within the sandbox environment
- **Private data:** Using broadly scoped personal access tokens can lead to information leakage between repositories
### Performance and Resource Management
- **Connection persistence:** The CLI maintains persistent connections to servers that successfully register tools
- **Automatic cleanup:** Connections to servers providing no tools are automatically closed
- **Timeout management:** Configure appropriate timeouts based on your server's response characteristics
- **Resource monitoring:** MCP servers run as separate processes and consume system resources
### Schema Compatibility
- **Schema compliance mode:** By default (`schemaCompliance: "auto"`), tool schemas are passed through as-is. Set `"model": { "generationConfig": { "schemaCompliance": "openapi_30" } }` in your `settings.json` to convert tool schemas to strict OpenAPI 3.0 format.
- **OpenAPI 3.0 transformations:** When `openapi_30` mode is enabled, the system handles:
- Nullable types: `["string", "null"]` → `type: "string", nullable: true`
- Const values: `const: "foo"` → `enum: ["foo"]`
- Exclusive limits: numeric `exclusiveMinimum` → boolean form with `minimum`
- Keyword removal: `$schema`, `$id`, `dependencies`, `patternProperties`
- **Name sanitization:** Tool names are automatically sanitized to meet API requirements
- **Conflict resolution:** Tool name conflicts between servers are resolved through automatic prefixing
This comprehensive integration makes MCP servers a powerful way to extend the CLI's capabilities while maintaining security, reliability, and ease of use.
## Returning Rich Content from Tools
MCP tools are not limited to returning simple text. You can return rich, multi-part content, including text, images, audio, and other binary data in a single tool response. This allows you to build powerful tools that can provide diverse information to the model in a single turn.
All data returned from the tool is processed and sent to the model as context for its next generation, enabling it to reason about or summarize the provided information.
### How It Works
To return rich content, your tool's response must adhere to the MCP specification for a [`CallToolResult`](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#tool-result). The `content` field of the result should be an array of `ContentBlock` objects. The CLI will correctly process this array, separating text from binary data and packaging it for the model.
You can mix and match different content block types in the `content` array. The supported block types include:
- `text`
- `image`
- `audio`
- `resource` (embedded content)
- `resource_link`
### Example: Returning Text and an Image
Here is an example of a valid JSON response from an MCP tool that returns both a text description and an image:
```json
{
"content": [
{
"type": "text",
"text": "Here is the logo you requested."
},
{
"type": "image",
"data": "BASE64_ENCODED_IMAGE_DATA_HERE",
"mimeType": "image/png"
},
{
"type": "text",
"text": "The logo was created in 2025."
}
]
}
```
When Qwen Code receives this response, it will:
1. Extract all the text and combine it into a single `functionResponse` part for the model.
2. Present the image data as a separate `inlineData` part.
3. Provide a clean, user-friendly summary in the CLI, indicating that both text and an image were received.
This enables you to build sophisticated tools that can provide rich, multi-modal context to the Qwen model.
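For server authors, here is a sketch of producing such a response with the TypeScript MCP SDK; the server name, tool name, and image payload are placeholders:

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';

const server = new McpServer({ name: 'logo-server', version: '1.0.0' });

// Illustrative tool returning a mix of text and image content blocks,
// mirroring the JSON response shown above.
server.registerTool(
  'fetch_logo',
  { description: 'Return the project logo with a short description' },
  async () => ({
    content: [
      { type: 'text', text: 'Here is the logo you requested.' },
      {
        type: 'image',
        data: 'BASE64_ENCODED_IMAGE_DATA_HERE', // placeholder base64 payload
        mimeType: 'image/png',
      },
      { type: 'text', text: 'The logo was created in 2025.' },
    ],
  }),
);
```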
## MCP Prompts as Slash Commands
In addition to tools, MCP servers can expose predefined prompts that can be executed as slash commands within Qwen Code. This allows you to create shortcuts for common or complex queries that can be easily invoked by name.
### Defining Prompts on the Server
Here's a small example of a stdio MCP server that defines prompts:
```ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
const server = new McpServer({
name: 'prompt-server',
version: '1.0.0',
});
server.registerPrompt(
'poem-writer',
{
title: 'Poem Writer',
description: 'Write a nice haiku',
argsSchema: { title: z.string(), mood: z.string().optional() },
},
({ title, mood }) => ({
messages: [
{
role: 'user',
content: {
type: 'text',
text: `Write a haiku${mood ? ` with the mood ${mood}` : ''} called ${title}. Note that a haiku is 5 syllables followed by 7 syllables followed by 5 syllables `,
},
},
],
}),
);
const transport = new StdioServerTransport();
await server.connect(transport);
```
This can be included in `settings.json` under `mcpServers` with:
```json
{
"mcpServers": {
"nodeServer": {
"command": "node",
"args": ["filename.ts"]
}
}
}
```
### Invoking Prompts
Once a prompt is discovered, you can invoke it using its name as a slash command. The CLI will automatically handle parsing arguments.
```bash
/poem-writer --title="Qwen Code" --mood="reverent"
```
or, using positional arguments:
```bash
/poem-writer "Qwen Code" reverent
```
When you run this command, the CLI executes the `prompts/get` method on the MCP server with the provided arguments. The server is responsible for substituting the arguments into the prompt template and returning the final prompt text. The CLI then sends this prompt to the model for execution. This provides a convenient way to automate and share common workflows.
## Managing MCP Servers with `qwen mcp`
While you can always configure MCP servers by manually editing your `settings.json` file, the CLI provides a convenient set of commands to manage your server configurations programmatically. These commands streamline the process of adding, listing, and removing MCP servers without needing to directly edit JSON files.
### Adding a Server (`qwen mcp add`)
The `add` command configures a new MCP server in your `settings.json`. Based on the scope (`-s, --scope`), it will be added to either the user config `~/.qwen/settings.json` or the project config `.qwen/settings.json` file.
**Command:**
```bash
qwen mcp add [options] <name> <commandOrUrl> [args...]
```
- `<name>`: A unique name for the server.
- `<commandOrUrl>`: The command to execute (for `stdio`) or the URL (for `http`/`sse`).
- `[args...]`: Optional arguments for a `stdio` command.
**Options (Flags):**
- `-s, --scope`: Configuration scope (user or project). [default: "project"]
- `-t, --transport`: Transport type (stdio, sse, http). [default: "stdio"]
- `-e, --env`: Set environment variables (e.g. -e KEY=value).
- `-H, --header`: Set HTTP headers for SSE and HTTP transports (e.g. -H "X-Api-Key: abc123" -H "Authorization: Bearer abc123").
- `--timeout`: Set connection timeout in milliseconds.
- `--trust`: Trust the server (bypass all tool call confirmation prompts).
- `--description`: Set the description for the server.
- `--include-tools`: A comma-separated list of tools to include.
- `--exclude-tools`: A comma-separated list of tools to exclude.
#### Adding an stdio server
This is the default transport for running local servers.
```bash
# Basic syntax
qwen mcp add <name> <command> [args...]
# Example: Adding a local server
qwen mcp add my-stdio-server -e API_KEY=123 /path/to/server arg1 arg2 arg3
# Example: Adding a local python server
qwen mcp add python-server python server.py --port 8080
```
#### Adding an HTTP server
This transport is for servers that use the streamable HTTP transport.
```bash
# Basic syntax
qwen mcp add --transport http <name> <url>
# Example: Adding an HTTP server
qwen mcp add --transport http http-server https://api.example.com/mcp/
# Example: Adding an HTTP server with an authentication header
qwen mcp add --transport http secure-http https://api.example.com/mcp/ --header "Authorization: Bearer abc123"
```
#### Adding an SSE server
This transport is for servers that use Server-Sent Events (SSE).
```bash
# Basic syntax
qwen mcp add --transport sse <name> <url>
# Example: Adding an SSE server
qwen mcp add --transport sse sse-server https://api.example.com/sse/
# Example: Adding an SSE server with an authentication header
qwen mcp add --transport sse secure-sse https://api.example.com/sse/ --header "Authorization: Bearer abc123"
```
### Listing Servers (`qwen mcp list`)
To view all MCP servers currently configured, use the `list` command. It displays each server's name, configuration details, and connection status.
**Command:**
```bash
qwen mcp list
```
**Example Output:**
```sh
✓ stdio-server: command: python3 server.py (stdio) - Connected
✓ http-server: https://api.example.com/mcp (http) - Connected
✗ sse-server: https://api.example.com/sse (sse) - Disconnected
```
### Removing a Server (`qwen mcp remove`)
To delete a server from your configuration, use the `remove` command with the server's name.
**Command:**
```bash
qwen mcp remove <name>
```
**Example:**
```bash
qwen mcp remove my-server
```
This will find and delete the "my-server" entry from the `mcpServers` object in the appropriate `settings.json` file based on the scope (`-s, --scope`).

View File

@@ -0,0 +1,44 @@
# Memory Tool (`save_memory`)
This document describes the `save_memory` tool for Qwen Code.
## Description
Use `save_memory` to save and recall information across your Qwen Code sessions. With `save_memory`, you can direct the CLI to remember key details, providing personalized and directed assistance.
### Arguments
`save_memory` takes one argument:
- `fact` (string, required): The specific fact or piece of information to remember. This should be a clear, self-contained statement written in natural language.
## How to use `save_memory` with Qwen Code
The tool appends the provided `fact` to your context file in the user's home directory (`~/.qwen/QWEN.md` by default). This filename can be configured via `contextFileName`.
Once added, the facts are stored under a `## Qwen Added Memories` section. This file is loaded as context in subsequent sessions, allowing the CLI to recall the saved information.
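For illustration, after saving a couple of facts, the relevant portion of the context file might look like this (the exact entry formatting is an assumption):

```markdown
## Qwen Added Memories

- My preferred programming language is Python.
- The project I'm currently working on is called 'qwen-code'.
```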
Usage:
```
save_memory(fact="Your fact here.")
```
### `save_memory` examples
Remember a user preference:
```
save_memory(fact="My preferred programming language is Python.")
```
Store a project-specific detail:
```
save_memory(fact="The project I'm currently working on is called 'qwen-code'.")
```
## Important notes
- **General usage:** This tool should be used for concise, important facts. It is not intended for storing large amounts of data or conversational history.
- **Memory file:** The memory file is a plain text Markdown file, so you can view and edit it manually if needed.

View File

@@ -0,0 +1,69 @@
# Multi File Read Tool (`read_many_files`)
This document describes the `read_many_files` tool for Qwen Code.
## Description
Use `read_many_files` to read content from multiple files specified by paths or glob patterns. The behavior of this tool depends on the provided files:
- For text files, this tool concatenates their content into a single string.
- For image (e.g., PNG, JPEG), PDF, audio (MP3, WAV), and video (MP4, MOV) files, it reads and returns them as base64-encoded data, provided they are explicitly requested by name or extension.
`read_many_files` can be used to perform tasks such as getting an overview of a codebase, finding where specific functionality is implemented, reviewing documentation, or gathering context from multiple configuration files.
**Note:** `read_many_files` looks for files matching the provided paths or glob patterns. A directory path such as `"/docs"` will return an empty result; the tool requires a pattern such as `"/docs/*"` or `"/docs/*.md"` to identify the relevant files.
### Arguments
`read_many_files` takes the following arguments:
- `paths` (list[string], required): An array of glob patterns or paths relative to the tool's target directory (e.g., `["src/**/*.ts"]`, `["README.md", "docs/*", "assets/logo.png"]`).
- `exclude` (list[string], optional): Glob patterns for files/directories to exclude (e.g., `["**/*.log", "temp/"]`). These are added to default excludes if `useDefaultExcludes` is true.
- `include` (list[string], optional): Additional glob patterns to include. These are merged with `paths` (e.g., `["*.test.ts"]` to specifically add test files if they were broadly excluded, or `["images/*.jpg"]` to include specific image types).
- `recursive` (boolean, optional): Whether to search recursively. This is primarily controlled by `**` in glob patterns. Defaults to `true`.
- `useDefaultExcludes` (boolean, optional): Whether to apply a list of default exclusion patterns (e.g., `node_modules`, `.git`, non-image/PDF binary files). Defaults to `true`.
- `respect_git_ignore` (boolean, optional): Whether to respect .gitignore patterns when finding files. Defaults to true.
## How to use `read_many_files` with Qwen Code
`read_many_files` searches for files matching the provided `paths` and `include` patterns, while respecting `exclude` patterns and default excludes (if enabled).
- For text files: it reads the content of each matched file (attempting to skip binary files not explicitly requested as image/PDF) and concatenates it into a single string, with a separator `--- {filePath} ---` between the content of each file. Uses UTF-8 encoding by default.
- The tool inserts a `--- End of content ---` after the last file.
- For image and PDF files: if explicitly requested by name or extension (e.g., `paths: ["logo.png"]` or `include: ["*.pdf"]`), the tool reads the file and returns its content as a base64 encoded string.
- The tool attempts to detect and skip other binary files (those not matching common image/PDF types or not explicitly requested) by checking for null bytes in their initial content.
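For illustration, reading two text files might produce output shaped like this (a sketch based on the separators described above; file contents are placeholders):

```
--- /path/to/project/src/a.ts ---
export const a = 1;
--- /path/to/project/src/b.ts ---
export const b = 2;
--- End of content ---
```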
Usage:
```
read_many_files(paths=["Your files or paths here."], include=["Additional files to include."], exclude=["Files to exclude."], recursive=False, useDefaultExcludes=false, respect_git_ignore=true)
```
## `read_many_files` examples
Read all TypeScript files in the `src` directory:
```
read_many_files(paths=["src/**/*.ts"])
```
Read the main README, all Markdown files in the `docs` directory, and a specific logo image, excluding a specific file:
```
read_many_files(paths=["README.md", "docs/**/*.md", "assets/logo.png"], exclude=["docs/OLD_README.md"])
```
Read all JavaScript files but explicitly include test files and all JPEGs in an `images` folder:
```
read_many_files(paths=["**/*.js"], include=["**/*.test.js", "images/**/*.jpg"], useDefaultExcludes=False)
```
## Important notes
- **Binary file handling:**
- **Image/PDF/Audio/Video files:** The tool can read common image types (PNG, JPEG, etc.), PDF, audio (mp3, wav), and video (mp4, mov) files, returning them as base64 encoded data. These files _must_ be explicitly targeted by the `paths` or `include` patterns (e.g., by specifying the exact filename like `video.mp4` or a pattern like `*.mov`).
- **Other binary files:** The tool attempts to detect and skip other types of binary files by examining their initial content for null bytes. The tool excludes these files from its output.
- **Performance:** Reading a very large number of files or very large individual files can be resource-intensive.
- **Path specificity:** Ensure paths and glob patterns are correctly specified relative to the tool's target directory. For image/PDF files, ensure the patterns are specific enough to include them.
- **Default excludes:** Be aware of the default exclusion patterns (like `node_modules`, `.git`) and use `useDefaultExcludes=false` if you need to override them, but do so cautiously.

View File

@@ -0,0 +1,259 @@
# Shell Tool (`run_shell_command`)
This document describes the `run_shell_command` tool for Qwen Code.
## Description
Use `run_shell_command` to interact with the underlying system, run scripts, or perform command-line operations. `run_shell_command` executes a given shell command, including interactive commands that require user input (e.g., `vim`, `git rebase -i`) if the `tools.shell.enableInteractiveShell` setting is set to `true`.
On Windows, commands are executed with `cmd.exe /c`. On other platforms, they are executed with `bash -c`.
### Arguments
`run_shell_command` takes the following arguments:
- `command` (string, required): The exact shell command to execute.
- `description` (string, optional): A brief description of the command's purpose, which will be shown to the user.
- `directory` (string, optional): The directory (relative to the project root) in which to execute the command. If not provided, the command runs in the project root.
- `is_background` (boolean, required): Whether to run the command in the background. This parameter is required to ensure explicit decision-making about command execution mode. Set to `true` for long-running processes like development servers, watchers, or daemons that should continue running without blocking further commands. Set to `false` for one-time commands that should complete before proceeding.
## How to use `run_shell_command` with Qwen Code
When using `run_shell_command`, the command is executed as a subprocess. You can control whether commands run in the background or foreground using the `is_background` parameter, or by explicitly adding `&` to a command.
### Required Background Parameter
The `is_background` parameter is **required** for all command executions. This design ensures that the LLM (and users) must explicitly decide whether each command should run in the background or foreground, promoting intentional and predictable command execution behavior. By making this parameter mandatory, we avoid unintended fallback to foreground execution, which could block subsequent operations when dealing with long-running processes.
### Background vs Foreground Execution
The tool intelligently handles background and foreground execution based on your explicit choice:
**Use background execution (`is_background: true`) for:**
- Long-running development servers: `npm run start`, `npm run dev`, `yarn dev`
- Build watchers: `npm run watch`, `webpack --watch`
- Database servers: `mongod`, `mysql`, `redis-server`
- Web servers: `python -m http.server`, `php -S localhost:8000`
- Any command expected to run indefinitely until manually stopped
**Use foreground execution (`is_background: false`) for:**
- One-time commands: `ls`, `cat`, `grep`
- Build commands: `npm run build`, `make`
- Installation commands: `npm install`, `pip install`
- Git operations: `git commit`, `git push`
- Test runs: `npm test`, `pytest`
### Execution Information
The tool returns detailed information about the execution, including:
- `Command`: The command that was executed.
- `Directory`: The directory where the command was run.
- `Stdout`: Output from the standard output stream.
- `Stderr`: Output from the standard error stream.
- `Error`: Any error message reported by the subprocess.
- `Exit Code`: The exit code of the command.
- `Signal`: The signal number if the command was terminated by a signal.
- `Background PIDs`: A list of PIDs for any background processes started.
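For illustration, a completed foreground command might report values like these (the exact rendering is an assumption):

```
Command: npm run build
Directory: (project root)
Stdout: Build completed in 4.2s
Stderr: (empty)
Error: (none)
Exit Code: 0
Signal: (none)
Background PIDs: (none)
```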
Usage:
```bash
run_shell_command(command="Your commands.", description="Your description of the command.", directory="Your execution directory.", is_background=false)
```
**Note:** The `is_background` parameter is required and must be explicitly specified for every command execution.
## `run_shell_command` examples
List files in the current directory:
```bash
run_shell_command(command="ls -la", is_background=false)
```
Run a script in a specific directory:
```bash
run_shell_command(command="./my_script.sh", directory="scripts", description="Run my custom script", is_background=false)
```
Start a background development server (recommended approach):
```bash
run_shell_command(command="npm run dev", description="Start development server in background", is_background=true)
```
Start a background server (alternative with explicit &):
```bash
run_shell_command(command="npm run dev &", description="Start development server in background", is_background=false)
```
Run a build command in foreground:
```bash
run_shell_command(command="npm run build", description="Build the project", is_background=false)
```
Start multiple background services:
```bash
run_shell_command(command="docker-compose up", description="Start all services", is_background=true)
```
## Configuration
You can configure the behavior of the `run_shell_command` tool by modifying your `settings.json` file or by using the `/settings` command in Qwen Code.
### Enabling Interactive Commands
To enable interactive commands, you need to set the `tools.shell.enableInteractiveShell` setting to `true`. This will use `node-pty` for shell command execution, which allows for interactive sessions. If `node-pty` is not available, it will fall back to the `child_process` implementation, which does not support interactive commands.
**Example `settings.json`:**
```json
{
"tools": {
"shell": {
"enableInteractiveShell": true
}
}
}
```
### Showing Color in Output
To show color in the shell output, you need to set the `tools.shell.showColor` setting to `true`. **Note: This setting only applies when `tools.shell.enableInteractiveShell` is enabled.**
**Example `settings.json`:**
```json
{
"tools": {
"shell": {
"showColor": true
}
}
}
```
### Setting the Pager
You can set a custom pager for the shell output by setting the `tools.shell.pager` setting. The default pager is `cat`. **Note: This setting only applies when `tools.shell.enableInteractiveShell` is enabled.**
**Example `settings.json`:**
```json
{
"tools": {
"shell": {
"pager": "less"
}
}
}
```
## Interactive Commands
The `run_shell_command` tool now supports interactive commands by integrating a pseudo-terminal (pty). This allows you to run commands that require real-time user input, such as text editors (`vim`, `nano`), terminal-based UIs (`htop`), and interactive version control operations (`git rebase -i`).
When an interactive command is running, you can send input to it from Qwen Code. To focus on the interactive shell, press `ctrl+f`. The terminal output, including complex TUIs, will be rendered correctly.
## Important notes
- **Security:** Be cautious when executing commands, especially those constructed from user input, to prevent security vulnerabilities.
- **Error handling:** Check the `Stderr`, `Error`, and `Exit Code` fields to determine if a command executed successfully.
- **Background processes:** When `is_background=true` or when a command contains `&`, the tool will return immediately and the process will continue to run in the background. The `Background PIDs` field will contain the process ID of the background process.
- **Background execution choices:** The `is_background` parameter is required and provides explicit control over execution mode. You can also add `&` to the command for manual background execution, but the `is_background` parameter must still be specified. The parameter provides clearer intent and automatically handles the background execution setup.
- **Command descriptions:** When using `is_background=true`, the command description will include a `[background]` indicator to clearly show the execution mode.
## Environment Variables
When `run_shell_command` executes a command, it sets the `QWEN_CODE=1` environment variable in the subprocess's environment. This allows scripts or tools to detect if they are being run from within the CLI.
## Command Restrictions
You can restrict the commands that can be executed by the `run_shell_command` tool by using the `tools.core` and `tools.exclude` settings in your configuration file.
- `tools.core`: To restrict `run_shell_command` to a specific set of commands, add entries to the `core` list under the `tools` category in the format `run_shell_command(<command>)`. For example, `"tools": {"core": ["run_shell_command(git)"]}` will only allow `git` commands. Including the generic `run_shell_command` acts as a wildcard, allowing any command not explicitly blocked.
- `tools.exclude`: To block specific commands, add entries to the `exclude` list under the `tools` category in the format `run_shell_command(<command>)`. For example, `"tools": {"exclude": ["run_shell_command(rm)"]}` will block `rm` commands.
The validation logic is designed to be secure and flexible:
1. **Command Chaining Disabled**: The tool automatically splits commands chained with `&&`, `||`, or `;` and validates each part separately. If any part of the chain is disallowed, the entire command is blocked.
2. **Prefix Matching**: The tool uses prefix matching. For example, if you allow `git`, you can run `git status` or `git log`.
3. **Blocklist Precedence**: The `tools.exclude` list is always checked first. If a command matches a blocked prefix, it will be denied, even if it also matches an allowed prefix in `tools.core`.
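A minimal sketch of these three rules (not the actual implementation; the prefix lists are assumed to be extracted from `run_shell_command(<command>)` entries):

```typescript
// Split chained commands, check the blocklist first, then the allowlist; an
// empty allowlist (a bare `run_shell_command` wildcard) allows everything.
function isCommandAllowed(
  command: string,
  allowedPrefixes: string[], // from tools.core, e.g. ['git', 'npm']
  blockedPrefixes: string[], // from tools.exclude, e.g. ['rm']
): boolean {
  const parts = command.split(/&&|\|\||;/).map((part) => part.trim());
  return parts.every((part) => {
    if (blockedPrefixes.some((prefix) => part.startsWith(prefix))) return false;
    if (allowedPrefixes.length === 0) return true;
    return allowedPrefixes.some((prefix) => part.startsWith(prefix));
  });
}
```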
### Command Restriction Examples
**Allow only specific command prefixes**
To allow only `git` and `npm` commands, and block all others:
```json
{
"tools": {
"core": ["run_shell_command(git)", "run_shell_command(npm)"]
}
}
```
- `git status`: Allowed
- `npm install`: Allowed
- `ls -l`: Blocked
**Block specific command prefixes**
To block `rm` and allow all other commands:
```json
{
"tools": {
"core": ["run_shell_command"],
"exclude": ["run_shell_command(rm)"]
}
}
```
- `rm -rf /`: Blocked
- `git status`: Allowed
- `npm install`: Allowed
**Blocklist takes precedence**
If a command prefix is in both `tools.core` and `tools.exclude`, it will be blocked.
```json
{
"tools": {
"core": ["run_shell_command(git)"],
"exclude": ["run_shell_command(git push)"]
}
}
```
- `git push origin main`: Blocked
- `git status`: Allowed
**Block all shell commands**
To block all shell commands, add the `run_shell_command` wildcard to `tools.exclude`:
```json
{
"tools": {
"exclude": ["run_shell_command"]
}
}
```
- `ls -l`: Blocked
- `any other command`: Blocked
## Security Note for `tools.exclude`
Command-specific restrictions in `tools.exclude` for `run_shell_command` are based on simple string matching and can be easily bypassed. This feature is **not a security mechanism** and should not be relied upon to safely execute untrusted code. It is recommended to use `tools.core` to explicitly select the commands that can be executed.

View File

@@ -0,0 +1,145 @@
# Task Tool (`task`)
This document describes the `task` tool for Qwen Code.
## Description
Use `task` to launch a specialized subagent to handle complex, multi-step tasks autonomously. The Task tool delegates work to specialized agents that can work independently with access to their own set of tools, allowing for parallel task execution and specialized expertise.
### Arguments
`task` takes the following arguments:
- `description` (string, required): A short (3-5 word) description of the task for user visibility and tracking purposes.
- `prompt` (string, required): The detailed task prompt for the subagent to execute. Should contain comprehensive instructions for autonomous execution.
- `subagent_type` (string, required): The type of specialized agent to use for this task. Must match one of the available configured subagents.
## How to use `task` with Qwen Code
The Task tool dynamically loads available subagents from your configuration and delegates tasks to them. Each subagent runs independently and can use its own set of tools, allowing for specialized expertise and parallel execution.
When you use the Task tool, the subagent will:
1. Receive the task prompt with full autonomy
2. Execute the task using its available tools
3. Return a final result message
4. Terminate (subagents are stateless and single-use)
Usage:
```
task(description="Brief task description", prompt="Detailed task instructions for the subagent", subagent_type="agent_name")
```
## Available Subagents
The available subagents depend on your configuration. Common subagent types might include:
- **general-purpose**: For complex multi-step tasks requiring various tools
- **code-reviewer**: For reviewing and analyzing code quality
- **test-runner**: For running tests and analyzing results
- **documentation-writer**: For creating and updating documentation
You can view available subagents by using the `/agents` command in Qwen Code.
## Task Tool Features
### Real-time Progress Updates
The Task tool provides live updates showing:
- Subagent execution status
- Individual tool calls being made by the subagent
- Tool call results and any errors
- Overall task progress and completion status
### Parallel Execution
You can launch multiple subagents concurrently by calling the Task tool multiple times in a single message, allowing for parallel task execution and improved efficiency.
### Specialized Expertise
Each subagent can be configured with the following (see the configuration sketch after this list):
- Specific tool access permissions
- Specialized system prompts and instructions
- Custom model configurations
- Domain-specific knowledge and capabilities
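As a hedged sketch of what such a configuration might look like (the file location, frontmatter fields, and tool names below are illustrative assumptions; consult the subagents documentation for the authoritative schema):
```markdown
---
name: code-reviewer
description: Reviews code changes for quality, security, and best practices.
# Illustrative tool allowlist; actual tool names depend on your setup
tools: read_file, glob, search_file_content
---

You are a meticulous code reviewer. Examine the changes you are asked
about, flag security and style issues, and end with a prioritized list
of recommendations.
```
A file like this (for example under `.qwen/agents/`) would define the `code-reviewer` subagent referenced in the examples below.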
## `task` examples
### Delegating to a general-purpose agent
```
task(
description="Code refactoring",
prompt="Please refactor the authentication module in src/auth/ to use modern async/await patterns instead of callbacks. Ensure all tests still pass and update any related documentation.",
subagent_type="general-purpose"
)
```
### Running parallel tasks
```
# Launch code review and test execution in parallel
task(
description="Code review",
prompt="Review the recent changes in the user management module for code quality, security issues, and best practices compliance.",
subagent_type="code-reviewer"
)
task(
description="Run tests",
prompt="Execute the full test suite and analyze any failures. Provide a summary of test coverage and recommendations for improvement.",
subagent_type="test-runner"
)
```
### Documentation generation
```
task(
description="Update docs",
prompt="Generate comprehensive API documentation for the newly implemented REST endpoints in the orders module. Include request/response examples and error codes.",
subagent_type="documentation-writer"
)
```
## When to Use the Task Tool
Use the Task tool when:
1. **Complex multi-step tasks** - Tasks requiring multiple operations that can be handled autonomously
2. **Specialized expertise** - Tasks that benefit from domain-specific knowledge or tools
3. **Parallel execution** - When you have multiple independent tasks that can run simultaneously
4. **Delegation needs** - When you want to hand off a complete task rather than micromanaging steps
5. **Resource-intensive operations** - Tasks that might take significant time or computational resources
## When NOT to Use the Task Tool
Don't use the Task tool for:
- **Simple, single-step operations** - Use direct tools like Read, Edit, etc.
- **Interactive tasks** - Tasks requiring back-and-forth communication
- **Specific file reads** - Use the Read tool directly for better performance
- **Simple searches** - Use the Grep or Glob tools directly
## Important Notes
- **Stateless execution**: Each subagent invocation is independent with no memory of previous executions
- **Single communication**: Subagents provide one final result message - no ongoing communication
- **Comprehensive prompts**: Your prompt should contain all necessary context and instructions for autonomous execution
- **Tool access**: Subagents only have access to tools configured in their specific configuration
- **Parallel capability**: Multiple subagents can run simultaneously for improved efficiency
- **Configuration dependent**: Available subagent types depend on your system configuration
## Configuration
Subagents are configured through Qwen Code's agent configuration system. Use the `/agents` command to:
- View available subagents
- Create new subagent configurations
- Modify existing subagent settings
- Set tool permissions and capabilities
For more information on configuring subagents, refer to the subagents documentation.

View File

@@ -0,0 +1,63 @@
# Todo Write Tool (`todo_write`)
This document describes the `todo_write` tool for Qwen Code.
## Description
Use `todo_write` to create and manage a structured task list for your current coding session. This tool helps the AI assistant track progress and organize complex tasks, providing you with visibility into what work is being performed.
### Arguments
`todo_write` takes one argument:
- `todos` (array, required): An array of todo items, where each item contains:
- `content` (string, required): The description of the task.
- `status` (string, required): The current status (`pending`, `in_progress`, or `completed`).
- `activeForm` (string, required): The present continuous form describing what is being done (e.g., "Running tests", "Building the project").
## How to use `todo_write` with Qwen Code
The AI assistant will automatically use this tool when working on complex, multi-step tasks. You don't need to explicitly request it, but you can ask the assistant to create a todo list if you want to see the planned approach for your request.
The tool stores todo lists in your home directory (`~/.qwen/todos/`) with session-specific files, so each coding session maintains its own task list.
## When the AI uses this tool
The assistant uses `todo_write` for:
- Complex tasks requiring multiple steps
- Feature implementations with several components
- Refactoring operations across multiple files
- Any work involving 3 or more distinct actions
The assistant will not use this tool for simple, single-step tasks or purely informational requests.
### `todo_write` examples
Creating a feature implementation plan:
```
todo_write(todos=[
{
"content": "Create user preferences model",
"status": "pending",
"activeForm": "Creating user preferences model"
},
{
"content": "Add API endpoints for preferences",
"status": "pending",
"activeForm": "Adding API endpoints for preferences"
},
{
"content": "Implement frontend components",
"status": "pending",
"activeForm": "Implementing frontend components"
}
])
```
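Updating progress as work proceeds (a sketch reusing the items above; only the `status` values change as tasks move from `pending` through `in_progress` to `completed`):
```
todo_write(todos=[
  {
    "content": "Create user preferences model",
    "status": "completed",
    "activeForm": "Creating user preferences model"
  },
  {
    "content": "Add API endpoints for preferences",
    "status": "in_progress",
    "activeForm": "Adding API endpoints for preferences"
  },
  {
    "content": "Implement frontend components",
    "status": "pending",
    "activeForm": "Implementing frontend components"
  }
])
```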
## Important notes
- **Automatic usage:** The AI assistant manages todo lists automatically during complex tasks.
- **Progress visibility:** You'll see todo lists updated in real-time as work progresses.
- **Session isolation:** Each coding session has its own todo list that doesn't interfere with others.

View File

@@ -0,0 +1,54 @@
# Web Fetch Tool (`web_fetch`)
This document describes the `web_fetch` tool for Qwen Code.
## Description
Use `web_fetch` to fetch content from a specified URL and process it using an AI model. The tool takes a URL and a prompt as input, fetches the URL content, converts HTML to markdown, and processes the content with the prompt using a small, fast model.
### Arguments
`web_fetch` takes two arguments:
- `url` (string, required): The URL to fetch content from. Must be a fully-formed valid URL starting with `http://` or `https://`.
- `prompt` (string, required): The prompt describing what information you want to extract from the page content.
## How to use `web_fetch` with Qwen Code
To use `web_fetch` with Qwen Code, provide a URL and a prompt describing what you want to extract from that URL. The tool will ask for confirmation before fetching the URL. Once confirmed, the tool will fetch the content directly and process it using an AI model.
The tool automatically converts HTML to markdown, rewrites GitHub blob URLs to their raw equivalents, and upgrades HTTP URLs to HTTPS for security.
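For example, a GitHub blob URL is rewritten to its raw equivalent before fetching (this follows the standard blob-to-raw URL mapping; the repository path is illustrative):
```
https://github.com/QwenLM/Qwen/blob/main/README.md
  -> https://raw.githubusercontent.com/QwenLM/Qwen/main/README.md
```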
Usage:
```
web_fetch(url="https://example.com", prompt="Summarize the main points of this article")
```
## `web_fetch` examples
Summarize a single article:
```
web_fetch(url="https://example.com/news/latest", prompt="Can you summarize the main points of this article?")
```
Extract specific information:
```
web_fetch(url="https://arxiv.org/abs/2401.0001", prompt="What are the key findings and methodology described in this paper?")
```
Analyze GitHub documentation:
```
web_fetch(url="https://github.com/QwenLM/Qwen/blob/main/README.md", prompt="What are the installation steps and main features?")
```
## Important notes
- **Single URL processing:** `web_fetch` processes one URL at a time. To analyze multiple URLs, make separate calls to the tool.
- **URL format:** The tool automatically upgrades HTTP URLs to HTTPS and converts GitHub blob URLs to raw format for better content access.
- **Content processing:** The tool fetches content directly and processes it using an AI model, converting HTML to readable text format.
- **Output quality:** The quality of the output will depend on the clarity of the instructions in the prompt.
- **MCP tools:** If an MCP-provided web fetch tool is available (its name starts with `mcp__`), prefer using that tool as it may have fewer restrictions.

View File

@@ -0,0 +1,186 @@
# Web Search Tool (`web_search`)
This document describes the `web_search` tool for performing web searches using multiple providers.
## Description
Use `web_search` to perform a web search and get information from the internet. The tool supports multiple search providers and returns a concise answer with source citations when available.
### Supported Providers
1. **DashScope** (Official, Free) - Automatically available for Qwen OAuth users (200 requests/minute, 2000 requests/day)
2. **Tavily** - High-quality search API with built-in answer generation
3. **Google Custom Search** - Google's Custom Search JSON API
### Arguments
`web_search` takes two arguments:
- `query` (string, required): The search query
- `provider` (string, optional): Specific provider to use ("dashscope", "tavily", "google")
- If not specified, uses the default provider from configuration
## Configuration
### Method 1: Settings File (Recommended)
Add to your `settings.json`:
```json
{
"webSearch": {
"provider": [
{ "type": "dashscope" },
{ "type": "tavily", "apiKey": "tvly-xxxxx" },
{
"type": "google",
"apiKey": "your-google-api-key",
"searchEngineId": "your-search-engine-id"
}
],
"default": "dashscope"
}
}
```
**Notes:**
- DashScope doesn't require an API key (official, free service)
- **Qwen OAuth users:** DashScope is automatically added to your provider list, even if not explicitly configured
- Configure additional providers (Tavily, Google) if you want to use them alongside DashScope
- Set `default` to specify which provider to use by default (if not set, priority order: Tavily > Google > DashScope)
### Method 2: Environment Variables
Set environment variables in your shell or `.env` file:
```bash
# Tavily
export TAVILY_API_KEY="tvly-xxxxx"
# Google
export GOOGLE_API_KEY="your-api-key"
export GOOGLE_SEARCH_ENGINE_ID="your-engine-id"
```
### Method 3: Command Line Arguments
Pass API keys when running Qwen Code:
```bash
# Tavily
qwen --tavily-api-key tvly-xxxxx
# Google
qwen --google-api-key your-key --google-search-engine-id your-id
# Specify default provider
qwen --web-search-default tavily
```
### Backward Compatibility (Deprecated)
⚠️ **DEPRECATED:** The legacy `tavilyApiKey` configuration is still supported for backward compatibility but is deprecated:
```json
{
"advanced": {
"tavilyApiKey": "tvly-xxxxx" // ⚠️ Deprecated
}
}
```
**Important:** This configuration is deprecated and will be removed in a future version. Please migrate to the new `webSearch` format shown above. The legacy setting still registers Tavily as a provider automatically, but we strongly recommend updating your configuration.
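For reference, the legacy setting above is equivalent to the following `webSearch` configuration. Setting `default` explicitly is optional here, since a lone Tavily provider already wins the fallback priority order:
```json
{
  "webSearch": {
    "provider": [{ "type": "tavily", "apiKey": "tvly-xxxxx" }],
    "default": "tavily"
  }
}
```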
## Disabling Web Search
If you want to disable the web search functionality, you can exclude the `web_search` tool in your `settings.json`:
```json
{
"tools": {
"exclude": ["web_search"]
}
}
```
**Note:** This setting requires a restart of Qwen Code to take effect. Once disabled, the `web_search` tool will not be available to the model, even if web search providers are configured.
## Usage Examples
### Basic search (using default provider)
```
web_search(query="latest advancements in AI")
```
### Search with specific provider
```
web_search(query="latest advancements in AI", provider="tavily")
```
### Real-world examples
```
web_search(query="weather in San Francisco today")
web_search(query="latest Node.js LTS version", provider="google")
web_search(query="best practices for React 19", provider="dashscope")
```
## Provider Details
### DashScope (Official)
- **Cost:** Free
- **Authentication:** Automatically available when using Qwen OAuth authentication
- **Configuration:** No API key required, automatically added to provider list for Qwen OAuth users
- **Quota:** 200 requests/minute, 2000 requests/day
- **Best for:** General queries, always available as fallback for Qwen OAuth users
- **Auto-registration:** If you're using Qwen OAuth, DashScope is automatically added to your provider list even if you don't configure it explicitly
### Tavily
- **Cost:** Requires API key (paid service with free tier)
- **Sign up:** https://tavily.com
- **Features:** High-quality results with AI-generated answers
- **Best for:** Research, comprehensive answers with citations
### Google Custom Search
- **Cost:** Free tier available (100 queries/day)
- **Setup:**
1. Enable Custom Search API in Google Cloud Console
2. Create a Custom Search Engine at https://programmablesearchengine.google.com
- **Features:** Google's search quality
- **Best for:** Specific, factual queries
## Important Notes
- **Response format:** Returns a concise answer with numbered source citations
- **Citations:** Source links are appended as a numbered list: [1], [2], etc.
- **Multiple providers:** If one provider fails, manually specify another using the `provider` parameter
- **DashScope availability:** Automatically available for Qwen OAuth users, no configuration needed
- **Default provider selection:** The system automatically selects a default provider based on availability:
1. Your explicit `default` configuration (highest priority)
2. CLI argument `--web-search-default`
3. First available provider by priority: Tavily > Google > DashScope
## Troubleshooting
**Tool not available?**
- **For Qwen OAuth users:** The tool is automatically registered with DashScope provider, no configuration needed
- **For other authentication types:** Ensure at least one provider (Tavily or Google) is configured
- For Tavily/Google: Verify your API keys are correct
**Provider-specific errors?**
- Use the `provider` parameter to try a different search provider
- Check your API quotas and rate limits
- Verify API keys are properly set in configuration
**Need help?**
- Check your configuration: Run `qwen` and use the settings dialog
- View your current settings in `~/.qwen/settings.json` (macOS/Linux) or `%USERPROFILE%\.qwen\settings.json` (Windows)