Compare commits


4 Commits

Author SHA1 Message Date
github-actions[bot]
56e0d2fbbf chore(release): v0.6.2-preview.0 2026-01-12 04:06:18 +00:00
tanzhenxin
5c884fd395 fix(core): handle missing delta in OpenAI stream chunks
Some OpenAI-compatible providers occasionally emit chat.completion.chunk choices
without a delta object. Guard optional reasoning_content access and add a
regression test to ensure chunk conversion does not throw.
2026-01-12 11:49:00 +08:00
xuewenjie
0073c77267 fix(shell): prevent console window flash on Windows for foreground tasks 2026-01-12 11:48:28 +08:00
xuewenjie
418aeb069d test: update shellExecutionService test for Windows spawn config changes 2026-01-12 11:48:28 +08:00
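For context on the `fix(core)` commit above, here is a minimal TypeScript sketch of the kind of guard the commit message describes. The chunk shape and function name are illustrative assumptions, not the actual qwen-code source.

```typescript
// Sketch: tolerate chat.completion.chunk choices that arrive without a delta.
// Types and helper name are hypothetical; only the guarding pattern matters.
interface StreamChoice {
  delta?: { content?: string; reasoning_content?: string };
  finish_reason?: string | null;
}

interface StreamChunk {
  choices?: StreamChoice[];
}

function extractChunkText(chunk: StreamChunk): { content: string; reasoning: string } {
  const choice = chunk.choices?.[0];
  // Some OpenAI-compatible providers omit `delta` entirely; optional chaining
  // keeps chunk conversion from throwing instead of assuming the field exists.
  return {
    content: choice?.delta?.content ?? '',
    reasoning: choice?.delta?.reasoning_content ?? '',
  };
}
```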
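The Windows console-flash fix is likewise about spawn configuration. As a hedged illustration (not necessarily the actual change), Node's `windowsHide` option is the usual way to keep a foreground child process from briefly popping up a console window on Windows:

```typescript
import { spawn } from 'node:child_process';

// Illustrative only: suppress the transient console window that Windows can
// flash when spawning a shell for a foreground task.
function runForegroundCommand(command: string, args: string[]) {
  return spawn(command, args, {
    stdio: 'inherit',
    windowsHide: true, // no-op on non-Windows platforms
  });
}
```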
213 changed files with 3192 additions and 21011 deletions

.github/CODEOWNERS vendored
View File

@@ -1,3 +0,0 @@
* @tanzhenxin @DennisYu07 @gwinthis @LaZzyMan @pomelo-nwu @Mingholy
# SDK TypeScript package changes require review from Mingholy
packages/sdk-typescript/** @Mingholy

View File

@@ -241,7 +241,7 @@ jobs:
${{ steps.vars.outputs.is_dry_run == 'false' && steps.vars.outputs.is_nightly == 'false' && steps.vars.outputs.is_preview == 'false' }}
id: 'pr'
env:
GITHUB_TOKEN: '${{ secrets.CI_BOT_PAT }}'
GITHUB_TOKEN: '${{ secrets.GITHUB_TOKEN }}'
RELEASE_BRANCH: '${{ steps.release_branch.outputs.BRANCH_NAME }}'
RELEASE_TAG: '${{ steps.version.outputs.RELEASE_TAG }}'
run: |-
@@ -258,15 +258,26 @@ jobs:
echo "PR_URL=${pr_url}" >> "${GITHUB_OUTPUT}"
- name: 'Wait for CI checks to complete'
if: |-
${{ steps.vars.outputs.is_dry_run == 'false' && steps.vars.outputs.is_nightly == 'false' && steps.vars.outputs.is_preview == 'false' }}
env:
GITHUB_TOKEN: '${{ secrets.GITHUB_TOKEN }}'
PR_URL: '${{ steps.pr.outputs.PR_URL }}'
run: |-
set -euo pipefail
echo "Waiting for CI checks to complete..."
gh pr checks "${PR_URL}" --watch --interval 30
- name: 'Enable auto-merge for release PR'
if: |-
${{ steps.vars.outputs.is_dry_run == 'false' && steps.vars.outputs.is_nightly == 'false' && steps.vars.outputs.is_preview == 'false' }}
env:
GITHUB_TOKEN: '${{ secrets.CI_BOT_PAT }}'
GITHUB_TOKEN: '${{ secrets.GITHUB_TOKEN }}'
PR_URL: '${{ steps.pr.outputs.PR_URL }}'
run: |-
set -euo pipefail
gh pr merge "${PR_URL}" --merge --auto --delete-branch
gh pr merge "${PR_URL}" --merge --auto
- name: 'Create Issue on Failure'
if: |-

View File

@@ -13,10 +13,5 @@
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"vitest.disableWorkspaceWarning": true,
"lsp": {
"enabled": true,
"allowed": ["typescript-language-server"],
"excluded": ["gopls"]
}
"vitest.disableWorkspaceWarning": true
}

View File

@@ -25,7 +25,7 @@ Qwen Code is an open-source AI agent for the terminal, optimized for [Qwen3-Code
- **OpenAI-compatible, OAuth free tier**: use an OpenAI-compatible API, or sign in with Qwen OAuth to get 2,000 free requests/day.
- **Open-source, co-evolving**: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
- **Agentic workflow, feature-rich**: rich built-in tools (Skills, SubAgents, Plan Mode) for a full agentic workflow and a Claude Code-like experience.
- **Terminal-first, IDE-friendly**: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.
- **Terminal-first, IDE-friendly**: built for developers who live in the command line, with optional integration for VS Code and Zed.
## Installation
@@ -137,11 +137,10 @@ Use `-p` to run Qwen Code without the interactive UI—ideal for scripts, automa
#### IDE integration
Use Qwen Code inside your editor (VS Code, Zed, and JetBrains IDEs):
Use Qwen Code inside your editor (VS Code and Zed):
- [Use in VS Code](https://qwenlm.github.io/qwen-code-docs/en/users/integration-vscode/)
- [Use in Zed](https://qwenlm.github.io/qwen-code-docs/en/users/integration-zed/)
- [Use in JetBrains IDEs](https://qwenlm.github.io/qwen-code-docs/en/users/integration-jetbrains/)
#### TypeScript SDK
@@ -201,11 +200,6 @@ If you encounter issues, check the [troubleshooting guide](https://qwenlm.github
To report a bug from within the CLI, run `/bug` and include a short title and repro steps.
## Connect with Us
- Discord: https://discord.gg/ycKBjdNd
- Dingtalk: https://qr.dingtalk.com/action/joingroup?code=v1,k1,+FX6Gf/ZDlTahTIRi8AEQhIaBlqykA0j+eBKKdhLeAE=&_dt_no_comment=1&origin=1
## Acknowledgments
This project is based on [Google Gemini CLI](https://github.com/google-gemini/gemini-cli). We acknowledge and appreciate the excellent work of the Gemini CLI team. Our main contribution focuses on parser-level adaptations to better support Qwen-Coder models.

View File

@@ -1,147 +0,0 @@
# Qwen Code CLI LSP Integration Plan Analysis
## 1. Project Overview
This plan integrates LSP (Language Server Protocol) capabilities natively into the Qwen Code CLI so that the AI agent can use code navigation, go-to-definition, find-references, and related features. LSP is implemented as a first-class extension mechanism alongside MCP.
## 2. Comparison of Approaches
### 2.1 Piebald-AI/claude-code-lsps approach
- **Architecture**: the client talks to each LSP directly; a `.lsp.json` config declares server commands/arguments, stdio transport, and file-extension routing
- **User configuration**: low friction; drop in a `.lsp.json` config and make sure the LSP binary is installed
- **Security**: LSP child processes run with user permissions; no built-in trust gating
- **Feature coverage**: can expose the full LSP surface (hover, diagnostics, code actions, rename, etc.)
### 2.2 Native LSP client approach (recommended)
- **Architecture**: the Qwen Code CLI acts as the LSP client itself and establishes JSON-RPC connections to language servers
- **User configuration**: built-in presets plus user-defined `.lsp.json` configuration
- **Security**: shares the same security controls as MCP (trusted workspaces, allow/deny lists, confirmation prompts)
- **Feature coverage**: exposes the full LSP feature set (streaming diagnostics, code actions, rename, semantic tokens, etc.)
### 2.3 cclsp + MCP approach (fallback)
- **Architecture**: calls cclsp as an LSP bridge over the MCP protocol
- **User configuration**: requires MCP configuration
- **Security**: governed by MCP security controls
- **Feature coverage**: limited to the MCP tools that cclsp maps
## 3. Detailed Plan for Native LSP Integration
### 3.1 Approach selection
- **Recommended**: the native LSP client as the primary path, because it offers full LSP functionality, lower latency, and a better user experience
- **Compatibility layer**: keep cclsp+MCP as a bridge for existing MCP workflows
- **Parallel architecture**: LSP and MCP coexist as independent extension mechanisms that share security policies
### 3.2 Implementation steps
#### 3.2.1 Create the native LSP service
Create a `NativeLspService` class under `packages/cli/src/services/lsp/` that handles:
- Workspace language detection
- Automatic discovery and startup of language servers
- Synchronization with the existing document/edit model
- Exposing LSP capabilities directly to the agent
#### 3.2.2 Configuration support
- Support built-in preset configurations (common language servers)
- Support user-defined `.lsp.json` configuration files
- Coexist with MCP configuration and share trust controls
#### 3.2.3 Startup integration
- Integrate inside the `loadCliConfig` function in `packages/cli/src/config/config.ts`
- Ensure the LSP service shares the same security controls as the MCP service
- Handle the duplicate invocation between the sandbox pre-flight and the main run
#### 3.2.4 Feature-flag configuration
- Add new settings entries in `packages/cli/src/config/settingsSchema.ts`
- Provide a global switch (e.g. `lsp.enabled=false`) so users can disable LSP
- Respect `mcp.allowed`/`mcp.excluded` and folder-trust settings
#### 3.2.5 Security controls
- Share the same security control mechanisms as MCP
- Enable automatically in trusted workspaces; prompt the user in untrusted ones
- Implement path allowlists and process-launch confirmation
#### 3.2.6 Error handling and user notification
- Detect missing language servers and suggest install commands
- Surface errors through the existing MCP status UI
- Implement retry/backoff, detect sandbox environments, and suppress auto-start there
### 3.3 Open questions to confirm
1. **Startup integration point**: integrating the native LSP service in `loadCliConfig` requires coordination with the MCP service
2. **Configuration precedence**: if a user already has a cclsp MCP configuration, should both coexist or should native LSP take precedence?
3. **Feature-switch design**: the switch should be global; LSP and MCP can be enabled/disabled independently
4. **Shared security model**: how to reuse MCP's trust/security control logic in code
5. **Language server management**: how to manage LSP server lifecycles and keep them in sync with the document edit model
6. **Dependency detection**: how to detect LSP server availability and degrade gracefully on failure
7. **Test strategy**: test LSP and MCP running in parallel, plus the shared security controls
### 3.4 Security considerations
- Share the same security control model as MCP
- Enable automatic LSP features only in trusted workspaces
- Require user confirmation before starting new LSP servers
- Prevent path hijacking by using safe path resolution
### 3.5 Advanced LSP feature support
- **Full LSP functionality**: streaming diagnostics, code actions, rename, semantic highlighting, workspace edits, etc.
- **Claude configuration compatibility**: import Claude Code-style `.lsp.json` configurations
- **Performance optimization**: optimize LSP server startup time and memory usage
### 3.6 User experience
- Offer install hints rather than installing automatically
- Show LSP and MCP server status in a unified status view
- Provide independent switches so users can control LSP and MCP separately
- Handle read-only/sandboxed environments safely with clear error messages
## 4. Implementation Summary
### 4.1 Completed work
1. **NativeLspService class**: created the core service class, including language detection, configuration merging, and LSP connection management
2. **LSP connection factory**: implemented stdio-based LSP connection creation and management
3. **Language detection**: implemented automatic detection based on file extensions and project configuration files
4. **Configuration system**: implemented merging of built-in presets, user configuration, and Claude-compatible configuration
5. **Security controls**: implemented the security mechanisms shared with MCP, including trust checks, user confirmation, and path safety validation
6. **CLI integration**: added the LSP service initialization point in the `loadCliConfig` function
### 4.2 Key components
#### 4.2.1 LspConnectionFactory
- Implements LSP connections with `vscode-jsonrpc` and `vscode-languageserver-protocol`
- Supports stdio transport and can be extended to TCP transport
- Provides full lifecycle management: connection creation, initialization, and shutdown
#### 4.2.2 NativeLspService
- **Language detection**: scans project files and configuration files to identify programming languages
- **Configuration merging**: merges built-in presets, user configuration, and the compatibility layer by precedence
- **LSP server management**: start, stop, and status tracking
- **Security controls**: the trust and confirmation mechanisms shared with MCP
#### 4.2.3 Configuration architecture
- **Built-in presets**: default LSP server configurations for common languages
- **User configuration**: supports the `.lsp.json` file format
- **Claude compatibility**: can import Claude Code LSP configurations
### 4.3 Dependency management
- `vscode-languageserver-protocol` for LSP protocol communication
- `vscode-jsonrpc` for JSON-RPC message transport
- `vscode-languageserver-textdocument` for document version management
### 4.4 Security features
- Workspace trust checks
- User confirmation (for untrusted workspaces)
- Command existence validation
- Path safety checks
## 5. Summary
The native LSP client is the best fit for the current Qwen Code architecture: it provides full LSP functionality, lower latency, and a better user experience. LSP sits alongside MCP as a first-class extension mechanism, sharing MCP's security controls while offering richer code intelligence. cclsp+MCP can be kept as a compatibility layer to support existing MCP workflows.
With this plan, the Qwen Code CLI gains full LSP capabilities, including go-to-definition, find references, completion, and diagnostics, giving the AI agent a much richer understanding of the code.
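As a rough illustration of the stdio connection factory described in section 4.2.1, here is a minimal TypeScript sketch using `vscode-jsonrpc` and `vscode-languageserver-protocol`. The helper name, spawn options, and capabilities are assumptions; this is not the actual `LspConnectionFactory` implementation.

```typescript
import { spawn } from 'node:child_process';
import {
  createMessageConnection,
  StreamMessageReader,
  StreamMessageWriter,
} from 'vscode-jsonrpc/node';
import {
  InitializeRequest,
  InitializedNotification,
} from 'vscode-languageserver-protocol';

// Hypothetical helper: spawn a language server over stdio and run the
// initialize handshake. A real factory would also add the trust checks,
// restart/backoff, and shutdown handling described above.
async function startStdioLanguageServer(
  command: string,
  args: string[],
  rootUri: string,
) {
  const child = spawn(command, args, { stdio: ['pipe', 'pipe', 'pipe'] });
  const connection = createMessageConnection(
    new StreamMessageReader(child.stdout!),
    new StreamMessageWriter(child.stdin!),
  );
  connection.listen();

  const initResult = await connection.sendRequest(InitializeRequest.type, {
    processId: process.pid,
    rootUri,
    capabilities: {},
    workspaceFolders: [{ uri: rootUri, name: 'workspace' }],
  });
  await connection.sendNotification(InitializedNotification.type, {});

  return { child, connection, capabilities: initResult.capabilities };
}
```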

View File

@@ -10,5 +10,4 @@ export default {
'web-search': 'Web Search',
memory: 'Memory',
'mcp-server': 'MCP Servers',
sandbox: 'Sandboxing',
};

View File

@@ -1,90 +0,0 @@
## Customizing the sandbox environment (Docker/Podman)
### BUILD_SANDBOX is not supported when Qwen Code is installed from the npm package
1. Building a custom sandbox requires the build scripts (scripts/build_sandbox.js) from the source repository.
2. These build scripts are not included in the packages published to npm.
3. The code contains hard-coded path checks that reject build requests from outside a source checkout.
If you need extra tools inside the container (e.g., `git`, `python`, `rg`), create a custom Dockerfile. The steps are as follows.
#### 1. Clone the qwen-code repository: https://github.com/QwenLM/qwen-code.git
#### 2. Run the following commands from the repository root
```bash
# 1. First, install the dependencies of the project
npm install
# 2. Build the Qwen Code project
npm run build
# 3. Verify that the dist directory has been generated
ls -la packages/cli/dist/
# 4. Create a global link in the CLI package directory
cd packages/cli
npm link
# 5. Verify the link (it should now point to the source checkout)
which qwen
# Expected output: /xxx/xxx/.nvm/versions/node/v24.11.1/bin/qwen
# Or a similar path; it should be a symbolic link
# 6. Inspect the symlink to see the resolved source path
ls -la $(dirname $(which qwen))/../lib/node_modules/@qwen-code/qwen-code
# It should be a symbolic link pointing to your source directory
# 7. Check the qwen version
qwen -v
# npm link overrides the globally installed qwen. To avoid confusing two installs with the same version number, uninstall the global CLI first
```
#### 3. Create a sandbox Dockerfile in the root of your own project
- Path: `.qwen/sandbox.Dockerfile`
- Official image registry: https://github.com/QwenLM/qwen-code/pkgs/container/qwen-code
```bash
# Based on the official Qwen sandbox image (It is recommended to explicitly specify the version)
FROM ghcr.io/qwenlm/qwen-code:sha-570ec43
# Add your extra tools here
RUN apt-get update && apt-get install -y \
git \
python3 \
ripgrep
```
#### 4. Build the sandbox image from the root of your project
```bash
GEMINI_SANDBOX=docker BUILD_SANDBOX=1 qwen -s
# Check that the sandbox version reported at startup matches your custom image version; if it does, the build succeeded
```
This builds a project-specific image based on the default sandbox image.
#### Remove npm link
- To restore the official qwen CLI, remove the npm link:
```bash
# Method 1: Unlink globally
npm unlink -g @qwen-code/qwen-code
# Method 2: Remove it in the packages/cli directory
cd packages/cli
npm unlink
# Verify the link has been removed
which qwen
# It should display "qwen not found"
# Reinstall the global version if necessary
npm install -g @qwen-code/qwen-code
# Verify the restored install
which qwen
qwen --version
```

View File

@@ -12,7 +12,6 @@ export default {
},
'integration-vscode': 'Visual Studio Code',
'integration-zed': 'Zed IDE',
'integration-jetbrains': 'JetBrains IDEs',
'integration-github-action': 'Github Actions',
'Code with Qwen Code': {
type: 'separator',

View File

@@ -5,13 +5,11 @@ Qwen Code supports two authentication methods. Pick the one that matches how you
- **Qwen OAuth (recommended)**: sign in with your `qwen.ai` account in a browser.
- **OpenAI-compatible API**: use an API key (OpenAI or any OpenAI-compatible provider / endpoint).
![](https://img.alicdn.com/imgextra/i2/O1CN01IxI1bt1sNO543AVTT_!!6000000005754-0-tps-1958-822.jpg)
## Option 1: Qwen OAuth (recommended & free) 👍
Use this if you want the simplest setup and you're using Qwen models.
Use this if you want the simplest setup and you’re using Qwen models.
- **How it works**: on first start, Qwen Code opens a browser login page. After you finish, credentials are cached locally so you usually won't need to log in again.
- **How it works**: on first start, Qwen Code opens a browser login page. After you finish, credentials are cached locally so you usually won’t need to log in again.
- **Requirements**: a `qwen.ai` account + internet access (at least for the first login).
- **Benefits**: no API key management, automatic credential refresh.
- **Cost & quota**: free, with a quota of **60 requests/minute** and **2,000 requests/day**.
@@ -26,54 +24,15 @@ qwen
Use this if you want to use OpenAI models or any provider that exposes an OpenAI-compatible API (e.g. OpenAI, Azure OpenAI, OpenRouter, ModelScope, Alibaba Cloud Bailian, or a self-hosted compatible endpoint).
### Recommended: Coding Plan (subscription-based) 🚀
### Quick start (interactive, recommended for local use)
Use this if you want predictable costs with higher usage quotas for the qwen3-coder-plus model.
When you choose the OpenAI-compatible option in the CLI, it will prompt you for:
> [!IMPORTANT]
>
> Coding Plan is only available for users in China mainland (Beijing region).
- **API key**
- **Base URL** (default: `https://api.openai.com/v1`)
- **Model** (default: `gpt-4o`)
- **How it works**: subscribe to the Coding Plan with a fixed monthly fee, then configure Qwen Code to use the dedicated endpoint and your subscription API key.
- **Requirements**: an active Coding Plan subscription from [Alibaba Cloud Bailian](https://bailian.console.aliyun.com/cn-beijing/?tab=globalset#/efm/coding_plan).
- **Benefits**: higher usage quotas, predictable monthly costs, access to latest qwen3-coder-plus model.
- **Cost & quota**: varies by plan (see table below).
#### Coding Plan Pricing & Quotas
| Feature | Lite Basic Plan | Pro Advanced Plan |
| :------------------ | :-------------------- | :-------------------- |
| **Price** | ¥40/month | ¥200/month |
| **5-Hour Limit** | Up to 1,200 requests | Up to 6,000 requests |
| **Weekly Limit** | Up to 9,000 requests | Up to 45,000 requests |
| **Monthly Limit** | Up to 18,000 requests | Up to 90,000 requests |
| **Supported Model** | qwen3-coder-plus | qwen3-coder-plus |
#### Quick Setup for Coding Plan
When you select the OpenAI-compatible option in the CLI, enter these values:
- **API key**: `sk-sp-xxxxx`
- **Base URL**: `https://coding.dashscope.aliyuncs.com/v1`
- **Model**: `qwen3-coder-plus`
> **Note**: Coding Plan API keys have the format `sk-sp-xxxxx`, which is different from standard Alibaba Cloud API keys.
#### Configure via Environment Variables
Set these environment variables to use Coding Plan:
```bash
export OPENAI_API_KEY="your-coding-plan-api-key" # Format: sk-sp-xxxxx
export OPENAI_BASE_URL="https://coding.dashscope.aliyuncs.com/v1"
export OPENAI_MODEL="qwen3-coder-plus"
```
For more details about Coding Plan, including subscription options and troubleshooting, see the [full Coding Plan documentation](https://bailian.console.aliyun.com/cn-beijing/?tab=doc#/doc/?type=model&url=3005961).
### Other OpenAI-compatible Providers
If you are using other providers (OpenAI, Azure, local LLMs, etc.), use the following configuration methods.
> **Note:** the CLI may display the key in plain text for verification. Make sure your terminal is not being recorded or shared.
### Configure via command-line arguments

View File

@@ -104,7 +104,7 @@ Settings are organized into categories. All settings should be placed within the
| `model.name` | string | The Qwen model to use for conversations. | `undefined` |
| `model.maxSessionTurns` | number | Maximum number of user/model/tool turns to keep in a session. -1 means unlimited. | `-1` |
| `model.summarizeToolOutput` | object | Enables or disables the summarization of tool output. You can specify the token budget for the summarization using the `tokenBudget` setting. Note: Currently only the `run_shell_command` tool is supported. For example `{"run_shell_command": {"tokenBudget": 2000}}` | `undefined` |
| `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, `disableCacheControl`, and `customHeaders` (custom HTTP headers for API requests), along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
| `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, and `disableCacheControl`, along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
| `model.chatCompression.contextPercentageThreshold` | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely. | `0.7` |
| `model.skipNextSpeakerCheck` | boolean | Skip the next speaker check. | `false` |
| `model.skipLoopDetection` | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions. | `false` |
@@ -114,16 +114,12 @@ Settings are organized into categories. All settings should be placed within the
**Example model.generationConfig:**
```json
```
{
"model": {
"generationConfig": {
"timeout": 60000,
"disableCacheControl": false,
"customHeaders": {
"X-Request-ID": "req-123",
"X-User-ID": "user-456"
},
"samplingParams": {
"temperature": 0.2,
"top_p": 0.8,
@@ -134,113 +130,19 @@ Settings are organized into categories. All settings should be placed within the
}
```
The `customHeaders` field allows you to add custom HTTP headers to all API requests. This is useful for request tracing, monitoring, API gateway routing, or when different models require different headers. If `customHeaders` is defined in `modelProviders[].generationConfig.customHeaders`, it will be used directly; otherwise, headers from `model.generationConfig.customHeaders` will be used. No merging occurs between the two levels.
**model.openAILoggingDir examples:**
- `"~/qwen-logs"` - Logs to `~/qwen-logs` directory
- `"./custom-logs"` - Logs to `./custom-logs` relative to current directory
- `"/tmp/openai-logs"` - Logs to absolute path `/tmp/openai-logs`
#### modelProviders
Use `modelProviders` to declare curated model lists per auth type that the `/model` picker can switch between. Keys must be valid auth types (`openai`, `anthropic`, `gemini`, `vertex-ai`, etc.). Each entry requires an `id` and **must include `envKey`**, with optional `name`, `description`, `baseUrl`, and `generationConfig`. Credentials are never persisted in settings; the runtime reads them from `process.env[envKey]`. Qwen OAuth models remain hard-coded and cannot be overridden.
##### Example
```json
{
"modelProviders": {
"openai": [
{
"id": "gpt-4o",
"name": "GPT-4o",
"envKey": "OPENAI_API_KEY",
"baseUrl": "https://api.openai.com/v1",
"generationConfig": {
"timeout": 60000,
"maxRetries": 3,
"customHeaders": {
"X-Model-Version": "v1.0",
"X-Request-Priority": "high"
},
"samplingParams": { "temperature": 0.2 }
}
}
],
"anthropic": [
{
"id": "claude-3-5-sonnet",
"envKey": "ANTHROPIC_API_KEY",
"baseUrl": "https://api.anthropic.com/v1"
}
],
"gemini": [
{
"id": "gemini-2.0-flash",
"name": "Gemini 2.0 Flash",
"envKey": "GEMINI_API_KEY",
"baseUrl": "https://generativelanguage.googleapis.com"
}
],
"vertex-ai": [
{
"id": "gemini-1.5-pro-vertex",
"envKey": "GOOGLE_API_KEY",
"baseUrl": "https://generativelanguage.googleapis.com"
}
]
}
}
```
> [!note]
> Only the `/model` command exposes non-default auth types. Anthropic, Gemini, Vertex AI, etc., must be defined via `modelProviders`. The `/auth` command intentionally lists only the built-in Qwen OAuth and OpenAI flows.
##### Resolution layers and atomicity
The effective auth/model/credential values are chosen per field using the following precedence (first present wins). You can combine `--auth-type` with `--model` to point directly at a provider entry; these CLI flags run before other layers.
| Layer (highest → lowest) | authType | model | apiKey | baseUrl | apiKeyEnvKey | proxy |
| -------------------------- | ----------------------------------- | ----------------------------------------------- | --------------------------------------------------- | ---------------------------------------------------- | ---------------------- | --------------------------------- |
| Programmatic overrides | `/auth ` | `/auth` input | `/auth` input | `/auth` input | — | — |
| Model provider selection | — | `modelProvider.id` | `env[modelProvider.envKey]` | `modelProvider.baseUrl` | `modelProvider.envKey` | — |
| CLI arguments | `--auth-type` | `--model` | `--openaiApiKey` (or provider-specific equivalents) | `--openaiBaseUrl` (or provider-specific equivalents) | — | — |
| Environment variables | — | Provider-specific mapping (e.g. `OPENAI_MODEL`) | Provider-specific mapping (e.g. `OPENAI_API_KEY`) | Provider-specific mapping (e.g. `OPENAI_BASE_URL`) | — | — |
| Settings (`settings.json`) | `security.auth.selectedType` | `model.name` | `security.auth.apiKey` | `security.auth.baseUrl` | — | — |
| Default / computed | Falls back to `AuthType.QWEN_OAUTH` | Built-in default (OpenAI ⇒ `qwen3-coder-plus`) | — | — | — | `Config.getProxy()` if configured |
\*When present, CLI auth flags override settings. Otherwise, `security.auth.selectedType` or the implicit default determine the auth type. Qwen OAuth and OpenAI are the only auth types surfaced without extra configuration.
Model-provider sourced values are applied atomically: once a provider model is active, every field it defines is protected from lower layers until you manually clear credentials via `/auth`. The final `generationConfig` is the projection across all layers—lower layers only fill gaps left by higher ones, and the provider layer remains impenetrable.
The merge strategy for `modelProviders` is REPLACE: the entire `modelProviders` from project settings will override the corresponding section in user settings, rather than merging the two.
##### Generation config layering
Per-field precedence for `generationConfig`:
1. Programmatic overrides (e.g. runtime `/model`, `/auth` changes)
2. `modelProviders[authType][].generationConfig`
3. `settings.model.generationConfig`
4. Content-generator defaults (`getDefaultGenerationConfig` for OpenAI, `getParameterValue` for Gemini, etc.)
`samplingParams` and `customHeaders` are both treated atomically; provider values replace the entire object. If `modelProviders[].generationConfig` defines these fields, they are used directly; otherwise, values from `model.generationConfig` are used. No merging occurs between provider and global configuration levels. Defaults from the content generator apply last so each provider retains its tuned baseline.
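To make the per-field "first present wins" behavior concrete, here is a schematic TypeScript sketch. The type and function names are hypothetical, not the resolver's actual code; it only illustrates taking `samplingParams` and `customHeaders` as atomic objects, as described above.

```typescript
// Hypothetical shapes for illustration only.
interface GenerationConfig {
  timeout?: number;
  maxRetries?: number;
  disableCacheControl?: boolean;
  customHeaders?: Record<string, string>;
  samplingParams?: Record<string, number>;
}

// Layers ordered highest precedence first: programmatic overrides,
// provider entry, settings, content-generator defaults.
function resolveGenerationConfig(
  ...layers: Array<GenerationConfig | undefined>
): GenerationConfig {
  const resolved: GenerationConfig = {};
  const keys: Array<keyof GenerationConfig> = [
    'timeout',
    'maxRetries',
    'disableCacheControl',
    'customHeaders',
    'samplingParams',
  ];
  for (const key of keys) {
    for (const layer of layers) {
      if (layer && layer[key] !== undefined) {
        // First layer that defines the field wins; object values are taken
        // whole, with no deep merge between provider and global config.
        (resolved as Record<string, unknown>)[key] = layer[key];
        break;
      }
    }
  }
  return resolved;
}
```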
##### Selection persistence and recommendations
> [!important]
> Define `modelProviders` in the user-scope `~/.qwen/settings.json` whenever possible and avoid persisting credential overrides in any scope. Keeping the provider catalog in user settings prevents merge/override conflicts between project and user scopes and ensures `/auth` and `/model` updates always write back to a consistent scope.
- `/model` and `/auth` persist `model.name` (where applicable) and `security.auth.selectedType` to the closest writable scope that already defines `modelProviders`; otherwise they fall back to the user scope. This keeps workspace/user files in sync with the active provider catalog.
- Without `modelProviders`, the resolver mixes CLI/env/settings layers, which is fine for single-provider setups but cumbersome when frequently switching. Define provider catalogs whenever multi-model workflows are common so that switches stay atomic, source-attributed, and debuggable.
#### context
| Setting | Type | Description | Default |
| ------------------------------------------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| `context.fileName` | string or array of strings | The name of the context file(s). | `undefined` |
| `context.importFormat` | string | The format to use when importing memory. | `undefined` |
| `context.discoveryMaxDirs` | number | Maximum number of directories to search for memory. | `200` |
| `context.includeDirectories` | array | Additional directories to include in the workspace context. Specifies an array of additional absolute or relative paths to include in the workspace context. Missing directories will be skipped with a warning by default. Paths can use `~` to refer to the user's home directory. This setting can be combined with the `--include-directories` command-line flag. | `[]` |
| `context.loadFromIncludeDirectories` | boolean | Controls the behavior of the `/memory refresh` command. If set to `true`, `QWEN.md` files should be loaded from all directories that are added. If set to `false`, `QWEN.md` should only be loaded from the current directory. | `false` |
| `context.fileFiltering.respectGitIgnore` | boolean | Respect .gitignore files when searching. | `true` |
@@ -287,26 +189,6 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
>
> **Security Note for MCP servers:** These settings use simple string matching on MCP server names, which can be modified. If you're a system administrator looking to prevent users from bypassing this, consider configuring the `mcpServers` at the system settings level such that the user will not be able to configure any MCP servers of their own. This should not be used as an airtight security mechanism.
#### lsp
> [!warning]
> **Experimental Feature**: LSP support is currently experimental and disabled by default. Enable it using the `--experimental-lsp` command line flag.
Language Server Protocol (LSP) settings for code intelligence features like go-to-definition, find references, and diagnostics. See the [LSP documentation](../features/lsp) for more details.
| Setting | Type | Description | Default |
| ------------------ | ---------------- | ---------------------------------------------------------------------------------------------------- | ----------- |
| `lsp.enabled` | boolean | Enable/disable LSP support. Has no effect unless `--experimental-lsp` is provided. | `false` |
| `lsp.autoDetect` | boolean | Automatically detect and start language servers based on project files. | `true` |
| `lsp.serverTimeout`| number | LSP server startup timeout in milliseconds. | `10000` |
| `lsp.allowed` | array of strings | An allowlist of LSP servers to allow. Empty means allow all detected servers. | `[]` |
| `lsp.excluded` | array of strings | A denylist of LSP servers to exclude. A server listed in both is excluded. | `[]` |
| `lsp.languageServers` | object | Custom language server configurations. See the [LSP documentation](../features/lsp#custom-language-servers) for configuration format. | `{}` |
> [!note]
>
> **Security Note for LSP servers:** LSP servers run with your user permissions and can execute code. They are only started in trusted workspaces by default. You can configure per-server trust requirements in the `.lsp.json` configuration file.
#### security
| Setting | Type | Description | Default |
@@ -330,12 +212,6 @@ Language Server Protocol (LSP) settings for code intelligence features like go-t
>
> **Note about advanced.tavilyApiKey:** This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new `webSearch` configuration format.
#### experimental
| Setting | Type | Description | Default |
| --------------------- | ------- | -------------------------------- | ------- |
| `experimental.skills` | boolean | Enable experimental Agent Skills | `false` |
#### mcpServers
Configures connections to one or more Model-Context Protocol (MCP) servers for discovering and using custom tools. Qwen Code attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names will be prefixed with the server alias you defined in the configuration (e.g., `serverAlias__actualToolName`) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of `command`, `url`, or `httpUrl` must be provided. If multiple are specified, the order of precedence is `httpUrl`, then `url`, then `command`.
@@ -505,9 +381,8 @@ Arguments passed directly when running the CLI can override other configurations
| `--telemetry-otlp-protocol` | | Sets the OTLP protocol for telemetry (`grpc` or `http`). | | Defaults to `grpc`. See [telemetry](../../developers/development/telemetry) for more information. |
| `--telemetry-log-prompts` | | Enables logging of prompts for telemetry. | | See [telemetry](../../developers/development/telemetry) for more information. |
| `--checkpointing` | | Enables [checkpointing](../features/checkpointing). | | |
| `--acp` | | Enables ACP mode (Agent Client Protocol). Useful for IDE/editor integrations like [Zed](../integration-zed). | | Stable. Replaces the deprecated `--experimental-acp` flag. |
| `--acp` | | Enables ACP mode (Agent Control Protocol). Useful for IDE/editor integrations like [Zed](../integration-zed). | | Stable. Replaces the deprecated `--experimental-acp` flag. |
| `--experimental-skills` | | Enables experimental [Agent Skills](../features/skills) (registers the `skill` tool and loads Skills from `.qwen/skills/` and `~/.qwen/skills/`). | | Experimental. |
| `--experimental-lsp` | | Enables experimental [LSP (Language Server Protocol)](../features/lsp) feature for code intelligence (go-to-definition, find references, diagnostics, etc.). | | Experimental. Requires language servers to be installed. |
| `--extensions` | `-e` | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use the special term `qwen -e none` to disable all extensions. Example: `qwen -e my-extension -e my-other-extension` |
| `--list-extensions` | `-l` | Lists all available extensions and exits. | | |
| `--proxy` | | Sets the proxy for the CLI. | Proxy URL | Example: `--proxy http://localhost:7890`. |
@@ -555,13 +430,16 @@ Here's a conceptual example of what a context file at the root of a TypeScript p
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
- Location: `~/.qwen/<configured-context-filename>` (e.g., `~/.qwen/QWEN.md` in your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
- Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a `.git` folder) or your home directory.
- Scope: Provides context relevant to the entire project or a significant portion of it.
3. **Sub-directory Context Files (Contextual/Local):**
- Location: The CLI also scans for the configured context file in subdirectories _below_ the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the `context.discoveryMaxDirs` setting in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](../configuration/memory).
- **Commands for Memory Management:**

View File

@@ -8,7 +8,6 @@ export default {
},
'approval-mode': 'Approval Mode',
mcp: 'MCP',
lsp: 'LSP (Language Server Protocol)',
'token-caching': 'Token Caching',
sandbox: 'Sandboxing',
language: 'i18n',

View File

@@ -59,7 +59,6 @@ Commands for managing AI tools and models.
| ---------------- | --------------------------------------------- | --------------------------------------------- |
| `/mcp` | List configured MCP servers and tools | `/mcp`, `/mcp desc` |
| `/tools` | Display currently available tool list | `/tools`, `/tools desc` |
| `/skills` | List and run available skills (experimental) | `/skills`, `/skills <name>` |
| `/approval-mode` | Change approval mode for tool usage | `/approval-mode <mode (auto-edit)> --project` |
| →`plan` | Analysis only, no execution | Secure review |
| →`default` | Require approval for edits | Daily use |

View File

@@ -1,383 +0,0 @@
# Language Server Protocol (LSP) Support
Qwen Code provides native Language Server Protocol (LSP) support, enabling advanced code intelligence features like go-to-definition, find references, diagnostics, and code actions. This integration allows the AI agent to understand your code more deeply and provide more accurate assistance.
## Overview
LSP support in Qwen Code works by connecting to language servers that understand your code. When you work with TypeScript, Python, Go, or other supported languages, Qwen Code can automatically start the appropriate language server and use it to:
- Navigate to symbol definitions
- Find all references to a symbol
- Get hover information (documentation, type info)
- View diagnostic messages (errors, warnings)
- Access code actions (quick fixes, refactorings)
- Analyze call hierarchies
## Quick Start
LSP is enabled by default in Qwen Code. For most common languages, Qwen Code will automatically detect and start the appropriate language server if it's installed on your system.
### Prerequisites
You need to have the language server for your programming language installed:
| Language | Language Server | Install Command |
|----------|----------------|-----------------|
| TypeScript/JavaScript | typescript-language-server | `npm install -g typescript-language-server typescript` |
| Python | pylsp | `pip install python-lsp-server` |
| Go | gopls | `go install golang.org/x/tools/gopls@latest` |
| Rust | rust-analyzer | [Installation guide](https://rust-analyzer.github.io/manual.html#installation) |
## Configuration
### Settings
You can configure LSP behavior in your `settings.json`:
```json
{
"lsp": {
"enabled": true,
"autoDetect": true,
"serverTimeout": 10000,
"allowed": [],
"excluded": []
}
}
```
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `lsp.enabled` | boolean | `true` | Enable/disable LSP support |
| `lsp.autoDetect` | boolean | `true` | Automatically detect and start language servers |
| `lsp.serverTimeout` | number | `10000` | Server startup timeout in milliseconds |
| `lsp.allowed` | string[] | `[]` | Allow only these servers (empty = allow all) |
| `lsp.excluded` | string[] | `[]` | Exclude these servers from starting |
### Custom Language Servers
You can configure custom language servers using a `.lsp.json` file in your project root:
```json
{
"languageServers": {
"my-custom-lsp": {
"languages": ["mylang"],
"command": "my-lsp-server",
"args": ["--stdio"],
"transport": "stdio",
"initializationOptions": {},
"settings": {}
}
}
}
```
#### Configuration Options
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `languages` | string[] | Yes | Languages this server handles |
| `command` | string | Yes* | Command to start the server |
| `args` | string[] | No | Command line arguments |
| `transport` | string | No | Transport type: `stdio` (default), `tcp`, or `socket` |
| `env` | object | No | Environment variables |
| `initializationOptions` | object | No | LSP initialization options |
| `settings` | object | No | Server settings |
| `workspaceFolder` | string | No | Override workspace folder |
| `startupTimeout` | number | No | Startup timeout in ms |
| `shutdownTimeout` | number | No | Shutdown timeout in ms |
| `restartOnCrash` | boolean | No | Auto-restart on crash |
| `maxRestarts` | number | No | Maximum restart attempts |
| `trustRequired` | boolean | No | Require trusted workspace |
*Required for `stdio` transport
#### TCP/Socket Transport
For servers that use TCP or Unix socket transport:
```json
{
"languageServers": {
"remote-lsp": {
"languages": ["custom"],
"transport": "tcp",
"socket": {
"host": "127.0.0.1",
"port": 9999
}
}
}
}
```
## Available LSP Operations
Qwen Code exposes LSP functionality through the unified `lsp` tool. Here are the available operations:
### Code Navigation
#### Go to Definition
Find where a symbol is defined.
```
Operation: goToDefinition
Parameters:
- filePath: Path to the file
- line: Line number (1-based)
- character: Column number (1-based)
```
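For reference, a hedged TypeScript sketch of what a `goToDefinition` operation might translate to on the LSP wire, using `vscode-languageserver-protocol`. The function name and connection setup are assumptions; note that the 1-based positions above have to be converted to the protocol's 0-based positions.

```typescript
import { pathToFileURL } from 'node:url';
import type { MessageConnection } from 'vscode-jsonrpc';
import { DefinitionRequest } from 'vscode-languageserver-protocol';

// Hypothetical mapping of the goToDefinition operation onto the LSP request.
// `connection` is an already-initialized connection to the language server.
async function goToDefinition(
  connection: MessageConnection,
  filePath: string,
  line: number,      // 1-based, as in the operation parameters above
  character: number, // 1-based
) {
  return connection.sendRequest(DefinitionRequest.type, {
    textDocument: { uri: pathToFileURL(filePath).toString() },
    // The LSP wire format is 0-based, so the tool layer converts here.
    position: { line: line - 1, character: character - 1 },
  });
}
```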
#### Find References
Find all references to a symbol.
```
Operation: findReferences
Parameters:
- filePath: Path to the file
- line: Line number (1-based)
- character: Column number (1-based)
- includeDeclaration: Include the declaration itself (optional)
```
#### Go to Implementation
Find implementations of an interface or abstract method.
```
Operation: goToImplementation
Parameters:
- filePath: Path to the file
- line: Line number (1-based)
- character: Column number (1-based)
```
### Symbol Information
#### Hover
Get documentation and type information for a symbol.
```
Operation: hover
Parameters:
- filePath: Path to the file
- line: Line number (1-based)
- character: Column number (1-based)
```
#### Document Symbols
Get all symbols in a document.
```
Operation: documentSymbol
Parameters:
- filePath: Path to the file
```
#### Workspace Symbol Search
Search for symbols across the workspace.
```
Operation: workspaceSymbol
Parameters:
- query: Search query string
- limit: Maximum results (optional)
```
### Call Hierarchy
#### Prepare Call Hierarchy
Get the call hierarchy item at a position.
```
Operation: prepareCallHierarchy
Parameters:
- filePath: Path to the file
- line: Line number (1-based)
- character: Column number (1-based)
```
#### Incoming Calls
Find all functions that call the given function.
```
Operation: incomingCalls
Parameters:
- callHierarchyItem: Item from prepareCallHierarchy
```
#### Outgoing Calls
Find all functions called by the given function.
```
Operation: outgoingCalls
Parameters:
- callHierarchyItem: Item from prepareCallHierarchy
```
### Diagnostics
#### File Diagnostics
Get diagnostic messages (errors, warnings) for a file.
```
Operation: diagnostics
Parameters:
- filePath: Path to the file
```
#### Workspace Diagnostics
Get all diagnostic messages across the workspace.
```
Operation: workspaceDiagnostics
Parameters:
- limit: Maximum results (optional)
```
### Code Actions
#### Get Code Actions
Get available code actions (quick fixes, refactorings) at a location.
```
Operation: codeActions
Parameters:
- filePath: Path to the file
- line: Start line number (1-based)
- character: Start column number (1-based)
- endLine: End line number (optional, defaults to line)
- endCharacter: End column (optional, defaults to character)
- diagnostics: Diagnostics to get actions for (optional)
- codeActionKinds: Filter by action kind (optional)
```
Code action kinds:
- `quickfix` - Quick fixes for errors/warnings
- `refactor` - Refactoring operations
- `refactor.extract` - Extract to function/variable
- `refactor.inline` - Inline function/variable
- `source` - Source code actions
- `source.organizeImports` - Organize imports
- `source.fixAll` - Fix all auto-fixable issues
## Security
LSP servers are only started in trusted workspaces by default. This is because language servers run with your user permissions and can execute code.
### Trust Controls
- **Trusted Workspace**: LSP servers start automatically
- **Untrusted Workspace**: LSP servers won't start unless `trustRequired: false`
To mark a workspace as trusted, use the `/trust` command or configure trusted folders in settings.
### Server Allowlists
You can restrict which servers are allowed to run:
```json
{
"lsp": {
"allowed": ["typescript-language-server", "gopls"],
"excluded": ["untrusted-server"]
}
}
```
## Troubleshooting
### Server Not Starting
1. **Check if the server is installed**: Run the command manually to verify
2. **Check the PATH**: Ensure the server binary is in your system PATH
3. **Check workspace trust**: The workspace must be trusted for LSP
4. **Check logs**: Look for error messages in the console output
### Slow Performance
1. **Large projects**: Consider excluding `node_modules` and other large directories
2. **Server timeout**: Increase `lsp.serverTimeout` for slow servers
3. **Multiple servers**: Exclude unused language servers
### No Results
1. **Server not ready**: The server may still be indexing
2. **File not saved**: Save your file for the server to pick up changes
3. **Wrong language**: Check if the correct server is running for your language
### Debugging
Enable debug logging to see LSP communication:
```bash
DEBUG=lsp* qwen
```
Or check the LSP debugging guide at `packages/cli/LSP_DEBUGGING_GUIDE.md`.
## Claude Code Compatibility
Qwen Code supports Claude Code-style `.lsp.json` configuration files. If you're migrating from Claude Code, your existing LSP configuration should work with minimal changes.
### Legacy Format
The legacy format (used by earlier versions) is still supported but deprecated:
```json
{
"typescript": {
"command": "typescript-language-server",
"args": ["--stdio"],
"transport": "stdio"
}
}
```
We recommend migrating to the new `languageServers` format:
```json
{
"languageServers": {
"typescript-language-server": {
"languages": ["typescript", "javascript"],
"command": "typescript-language-server",
"args": ["--stdio"],
"transport": "stdio"
}
}
}
```
## Best Practices
1. **Install language servers globally**: This ensures they're available in all projects
2. **Use project-specific settings**: Configure server options per project when needed
3. **Keep servers updated**: Update your language servers regularly for best results
4. **Trust wisely**: Only trust workspaces from trusted sources
## FAQ
### Q: How do I know which language servers are running?
Use the `/lsp status` command to see all configured and running language servers.
### Q: Can I use multiple language servers for the same file type?
Yes, but only one will be used for each operation. The first server that returns results wins.
### Q: Does LSP work in sandbox mode?
LSP servers run outside the sandbox to access your code. They're subject to workspace trust controls.
### Q: How do I disable LSP for a specific project?
Add to your project's `.qwen/settings.json`:
```json
{
"lsp": {
"enabled": false
}
}
```

View File

@@ -49,8 +49,6 @@ Cross-platform sandboxing with complete process isolation.
By default, Qwen Code uses a published sandbox image (configured in the CLI package) and will pull it as needed.
The container sandbox mounts your workspace and your `~/.qwen` directory into the container so auth and settings persist between runs.
**Best for**: Strong isolation on any OS, consistent tooling inside a known image.
### Choosing a method
@@ -159,13 +157,22 @@ For a working allowlist-style proxy example, see: [Example Proxy Script](/develo
## Linux UID/GID handling
On Linux, Qwen Code defaults to enabling UID/GID mapping so the sandbox runs as your user (and reuses the mounted `~/.qwen`). Override with:
The sandbox automatically handles user permissions on Linux. Override these permissions with:
```bash
export SANDBOX_SET_UID_GID=true # Force host UID/GID
export SANDBOX_SET_UID_GID=false # Disable UID/GID mapping
```
## Customizing the sandbox environment (Docker/Podman)
If you need extra tools inside the container (e.g., `git`, `python`, `rg`), create a custom Dockerfile:
- Path: `.qwen/sandbox.Dockerfile`
- Then run with: `BUILD_SANDBOX=1 qwen -s ...`
This builds a project-specific image based on the default sandbox image.
## Troubleshooting
### Common issues

View File

@@ -11,29 +11,12 @@ This guide shows you how to create, use, and manage Agent Skills in **Qwen Code*
## Prerequisites
- Qwen Code (recent version)
## How to enable
### Via CLI flag
- Run with the experimental flag enabled:
```bash
qwen --experimental-skills
```
### Via settings.json
Add to your `~/.qwen/settings.json` or project's `.qwen/settings.json`:
```json
{
"tools": {
"experimental": {
"skills": true
}
}
}
```
- Basic familiarity with Qwen Code ([Quickstart](../quickstart.md))
## What are Agent Skills?
@@ -44,14 +27,6 @@ Agent Skills package expertise into discoverable capabilities. Each Skill consis
Skills are **model-invoked** — the model autonomously decides when to use them based on your request and the Skills description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command`).
If you want to invoke a Skill explicitly, use the `/skills` slash command:
```bash
/skills <skill-name>
```
The `/skills` command is only available when you run with `--experimental-skills`. Use autocomplete to browse available Skills and descriptions.
### Benefits
- Extend Qwen Code for your workflows

View File

@@ -1,57 +0,0 @@
# JetBrains IDEs
> JetBrains IDEs provide native support for AI coding assistants through the Agent Client Protocol (ACP). This integration allows you to use Qwen Code directly within your JetBrains IDE with real-time code suggestions.
### Features
- **Native agent experience**: Integrated AI assistant panel within your JetBrains IDE
- **Agent Client Protocol**: Full support for ACP enabling advanced IDE interactions
- **Symbol management**: #-mention files to add them to the conversation context
- **Conversation history**: Access to past conversations within the IDE
### Requirements
- JetBrains IDE with ACP support (IntelliJ IDEA, WebStorm, PyCharm, etc.)
- Qwen Code CLI installed
### Installation
1. Install Qwen Code CLI:
```bash
npm install -g @qwen-code/qwen-code
```
2. Open your JetBrains IDE and navigate to the AI Chat tool window.
3. Click the 3-dot menu in the upper-right corner, select **Configure ACP Agent**, and configure Qwen Code with the following settings:
```json
{
"agent_servers": {
"qwen": {
"command": "/path/to/qwen",
"args": ["--acp"],
"env": {}
}
}
}
```
4. The Qwen Code agent should now be available in the AI Assistant panel
![Qwen Code in JetBrains AI Chat](https://img.alicdn.com/imgextra/i3/O1CN01ZxYel21y433Ci6eg0_!!6000000006524-2-tps-2774-1494.png)
## Troubleshooting
### Agent not appearing
- Run `qwen --version` in terminal to verify installation
- Ensure your JetBrains IDE version supports ACP
- Restart your JetBrains IDE
### Qwen Code not responding
- Check your internet connection
- Verify CLI works by running `qwen` in terminal
- [File an issue on GitHub](https://github.com/qwenlm/qwen-code/issues) if the problem persists

View File

@@ -18,17 +18,23 @@
### Requirements
- VS Code 1.85.0 or higher
- VS Code 1.98.0 or higher
### Installation
Download and install the extension from the [Visual Studio Code Extension Marketplace](https://marketplace.visualstudio.com/items?itemName=qwenlm.qwen-code-vscode-ide-companion).
1. Install Qwen Code CLI:
```bash
npm install -g qwen-code
```
2. Download and install the extension from the [Visual Studio Code Extension Marketplace](https://marketplace.visualstudio.com/items?itemName=qwenlm.qwen-code-vscode-ide-companion).
## Troubleshooting
### Extension not installing
- Ensure you have VS Code 1.85.0 or higher
- Ensure you have VS Code 1.98.0 or higher
- Check that VS Code has permission to install extensions
- Try installing directly from the Marketplace website

View File

@@ -1,6 +1,6 @@
# Zed Editor
> Zed Editor provides native support for AI coding assistants through the Agent Client Protocol (ACP). This integration allows you to use Qwen Code directly within Zed's interface with real-time code suggestions.
> Zed Editor provides native support for AI coding assistants through the Agent Control Protocol (ACP). This integration allows you to use Qwen Code directly within Zed's interface with real-time code suggestions.
![Zed Editor Overview](https://img.alicdn.com/imgextra/i1/O1CN01aAhU311GwEoNh27FP_!!6000000000686-2-tps-3024-1898.png)
@@ -20,9 +20,9 @@
1. Install Qwen Code CLI:
```bash
npm install -g @qwen-code/qwen-code
```
```bash
npm install -g qwen-code
```
2. Download and install [Zed Editor](https://zed.dev/)

View File

@@ -1,6 +1,5 @@
# Qwen Code overview
[![@qwen-code/qwen-code downloads](https://img.shields.io/npm/dw/@qwen-code/qwen-code.svg)](https://npm-compare.com/@qwen-code/qwen-code)
[![@qwen-code/qwen-code downloads](https://img.shields.io/npm/dw/@qwen-code/qwen-code.svg)](https://npm-compare.com/@qwen-code/qwen-code)
[![@qwen-code/qwen-code version](https://img.shields.io/npm/v/@qwen-code/qwen-code.svg)](https://www.npmjs.com/package/@qwen-code/qwen-code)
> Learn about Qwen Code, Qwen's agentic coding tool that lives in your terminal and helps you turn ideas into code faster than ever before.

View File

@@ -159,7 +159,7 @@ Qwen Code will:
### Test out other common workflows
There are a number of ways to work with Qwen Code:
There are a number of ways to work with Claude:
**Refactor code**

View File

@@ -9,18 +9,11 @@ This guide provides solutions to common issues and debugging tips, including top
## Authentication or login errors
- **Error: `UNABLE_TO_GET_ISSUER_CERT_LOCALLY`, `UNABLE_TO_VERIFY_LEAF_SIGNATURE`, or `unable to get local issuer certificate`**
- **Error: `UNABLE_TO_GET_ISSUER_CERT_LOCALLY` or `unable to get local issuer certificate`**
- **Cause:** You may be on a corporate network with a firewall that intercepts and inspects SSL/TLS traffic. This often requires a custom root CA certificate to be trusted by Node.js.
- **Solution:** Set the `NODE_EXTRA_CA_CERTS` environment variable to the absolute path of your corporate root CA certificate file.
- Example: `export NODE_EXTRA_CA_CERTS=/path/to/your/corporate-ca.crt`
- **Error: `Device authorization flow failed: fetch failed`**
- **Cause:** Node.js could not reach Qwen OAuth endpoints (often a proxy or SSL/TLS trust issue). When available, Qwen Code will also print the underlying error cause (for example: `UNABLE_TO_VERIFY_LEAF_SIGNATURE`).
- **Solution:**
- Confirm you can access `https://chat.qwen.ai` from the same machine/network.
- If you are behind a proxy, set it via `qwen --proxy <url>` (or the `proxy` setting in `settings.json`).
- If your network uses a corporate TLS inspection CA, set `NODE_EXTRA_CA_CERTS` as described above.
- **Issue: Unable to display UI after authentication failure**
- **Cause:** If authentication fails after selecting an authentication type, the `security.auth.selectedType` setting may be persisted in `settings.json`. On restart, the CLI may get stuck trying to authenticate with the failed auth type and fail to display the UI.
- **Solution:** Clear the `security.auth.selectedType` configuration item in your `settings.json` file:

View File

@@ -311,9 +311,9 @@ function setupAcpTest(
}
});
it('returns modes on initialize and allows setting mode and model', async () => {
it('returns modes on initialize and allows setting approval mode', async () => {
const rig = new TestRig();
rig.setup('acp mode and model');
rig.setup('acp approval mode');
const { sendRequest, cleanup, stderr } = setupAcpTest(rig);
@@ -366,14 +366,8 @@ function setupAcpTest(
const newSession = (await sendRequest('session/new', {
cwd: rig.testDir!,
mcpServers: [],
})) as {
sessionId: string;
models: {
availableModels: Array<{ modelId: string }>;
};
};
})) as { sessionId: string };
expect(newSession.sessionId).toBeTruthy();
expect(newSession.models.availableModels.length).toBeGreaterThan(0);
// Test 4: Set approval mode to 'yolo'
const setModeResult = (await sendRequest('session/set_mode', {
@@ -398,15 +392,6 @@ function setupAcpTest(
})) as { modeId: string };
expect(setModeResult3).toBeDefined();
expect(setModeResult3.modeId).toBe('default');
// Test 7: Set model using first available model
const firstModel = newSession.models.availableModels[0];
const setModelResult = (await sendRequest('session/set_model', {
sessionId: newSession.sessionId,
modelId: firstModel.modelId,
})) as { modelId: string };
expect(setModelResult).toBeDefined();
expect(setModelResult.modelId).toBeTruthy();
} catch (e) {
if (stderr.length) {
console.error('Agent stderr:', stderr.join(''));

View File

@@ -831,7 +831,7 @@ describe('Permission Control (E2E)', () => {
TEST_TIMEOUT,
);
it.skip(
it(
'should execute dangerous commands without confirmation',
async () => {
const q = query({

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@qwen-code/qwen-code",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"workspaces": [
"packages/*"
],
@@ -39,7 +39,6 @@
"globals": "^16.0.0",
"husky": "^9.1.7",
"json": "^11.0.0",
"json-schema": "^0.4.0",
"lint-staged": "^16.1.6",
"memfs": "^4.42.0",
"mnemonist": "^0.40.3",
@@ -6217,7 +6216,10 @@
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
"integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"readdirp": "^4.0.1"
},
@@ -10805,13 +10807,6 @@
"node": "^18.17.0 || >=20.5.0"
}
},
"node_modules/json-schema": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/json-schema/-/json-schema-0.4.0.tgz",
"integrity": "sha512-es94M3nTIfsEPisRafak+HDLfHXnKBhV3vU5eqPcS3flIWqcxJWgXHXiey3YrpaNsanY5ei1VoYEbOzijuq9BA==",
"dev": true,
"license": "(AFL-2.1 OR BSD-3-Clause)"
},
"node_modules/json-schema-traverse": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
@@ -13887,7 +13882,10 @@
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
"integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">= 14.18.0"
},
@@ -17318,7 +17316,7 @@
},
"packages/cli": {
"name": "@qwen-code/qwen-code",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"dependencies": {
"@google/genai": "1.30.0",
"@iarna/toml": "^2.2.5",
@@ -17955,7 +17953,7 @@
},
"packages/core": {
"name": "@qwen-code/qwen-code-core",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"hasInstallScript": true,
"dependencies": {
"@anthropic-ai/sdk": "^0.36.1",
@@ -17976,7 +17974,6 @@
"ajv-formats": "^3.0.0",
"async-mutex": "^0.5.0",
"chardet": "^2.1.0",
"chokidar": "^4.0.3",
"diff": "^7.0.0",
"dotenv": "^17.1.0",
"fast-levenshtein": "^2.0.6",
@@ -18596,7 +18593,7 @@
},
"packages/sdk-typescript": {
"name": "@qwen-code/sdk",
"version": "0.1.3",
"version": "0.1.0",
"license": "Apache-2.0",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.1",
@@ -21416,7 +21413,7 @@
},
"packages/test-utils": {
"name": "@qwen-code/qwen-code-test-utils",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"dev": true,
"license": "Apache-2.0",
"devDependencies": {
@@ -21428,7 +21425,7 @@
},
"packages/vscode-ide-companion": {
"name": "qwen-code-vscode-ide-companion",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"license": "LICENSE",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.1",

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"engines": {
"node": ">=20.0.0"
},
@@ -13,7 +13,7 @@
"url": "git+https://github.com/QwenLM/qwen-code.git"
},
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.1"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.6.2-preview.0"
},
"scripts": {
"start": "cross-env node scripts/start.js",
@@ -94,7 +94,6 @@
"globals": "^16.0.0",
"husky": "^9.1.7",
"json": "^11.0.0",
"json-schema": "^0.4.0",
"lint-staged": "^16.1.6",
"memfs": "^4.42.0",
"mnemonist": "^0.40.3",

View File

@@ -1,140 +0,0 @@
# LSP Debugging Guide
This guide explains how to debug the LSP (Language Server Protocol) feature in packages/cli.
## 1. Enable debug mode
The CLI supports a debug mode that provides additional log output:
```bash
# Run with the debug flag
qwen --debug [your command]
# Or set an environment variable
DEBUG=true qwen [your command]
DEBUG_MODE=true qwen [your command]
```
## 2. LSP configuration options
The LSP feature is configured through the settings system and supports the following options:
- `lsp.enabled`: enable/disable the native LSP client (defaults to `false`)
- `lsp.allowed`: allowlist of LSP server names
- `lsp.excluded`: blocklist of LSP server names
Example configuration in settings.json:
```json
{
  "lsp": {
    "enabled": true,
    "allowed": ["typescript-language-server", "pylsp"],
    "excluded": ["gopls"]
  }
}
```
`lsp.languageServers` can also be configured in `settings.json`, using the same format as `.lsp.json`.
## 3. NativeLspService debugging features
The `NativeLspService` class includes several debugging features:
### 3.1 Console logging
The service writes status messages to the console:
- `LSP server ${name} started successfully` - the server started successfully
- `LSP server ${name} failed to start` - the server failed to start
- `Workspace is untrusted, skipping LSP server discovery` - the workspace is untrusted, so discovery is skipped
### 3.2 Error handling
The service has comprehensive error handling with detailed error messages.
### 3.3 Status tracking
You can check the status of all LSP servers through the `getStatus()` method.
## 4. Debug commands
```bash
# Run with debugging enabled
qwen --debug --prompt "Debug the LSP feature"
# Check which LSP servers are detected in your project
# The system automatically detects languages and the corresponding LSP servers
```
## 5. Manual LSP server configuration
You can also configure LSP servers manually with a `.lsp.json` file in the project root.
The new format (keyed by server name) is recommended; the legacy format is still supported but prompts for migration:
```json
{
  "languageServers": {
    "pylsp": {
      "command": "pylsp",
      "args": [],
      "languages": ["python"],
      "transport": "stdio",
      "settings": {},
      "workspaceFolder": null,
      "startupTimeout": 10000,
      "shutdownTimeout": 3000,
      "restartOnCrash": true,
      "maxRestarts": 3,
      "trustRequired": true
    }
  }
}
```
Legacy format example:
```json
{
  "python": {
    "command": "pylsp",
    "args": [],
    "transport": "stdio",
    "trustRequired": true
  }
}
```
## 6. Troubleshooting LSP issues
### 6.1 Check that the LSP server is installed
- For TypeScript/JavaScript: `typescript-language-server`
- For Python: `pylsp`
- For Go: `gopls`
### 6.2 Verify workspace trust
- LSP servers may require a trusted workspace to start
- Check the `security.folderTrust.enabled` setting
### 6.3 Inspect the logs
- Look for console messages starting with `LSP server`
- Check for missing commands and path-safety issues
## 7. LSP service startup flow
The LSP service starts up as follows:
1. **Discovery and preparation**: the `discoverAndPrepare()` method detects the programming languages in the workspace
2. **Create server handles**: server handles are created for each detected language
3. **Start the servers**: the `start()` method starts all server handles
4. **State management**: server state transitions between `NOT_STARTED`, `IN_PROGRESS`, `READY`, and `FAILED`
## 8. Debugging tips
- Use the `--debug` flag to see the detailed startup process
- Check whether the workspace is trusted (this affects LSP server startup)
- Confirm that the LSP server commands are available on the system PATH
- Use the `getStatus()` method to monitor running server state (see the sketch below)
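A minimal sketch of such a status check (the return shape of `getStatus()` is not specified in this guide, so it is treated as opaque here):
```typescript
// Log the current LSP server status while debugging.
// Assumes a started NativeLspService-like instance; getStatus() is the only
// method relied on, and its return value is printed as-is.
function logLspStatus(service: { getStatus(): unknown }): void {
  console.log('LSP server status:', service.getStatus());
}
```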

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.7.1",
"version": "0.6.2-preview.0",
"description": "Qwen Code",
"repository": {
"type": "git",
@@ -33,7 +33,7 @@
"dist"
],
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.1"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.6.2-preview.0"
},
"dependencies": {
"@google/genai": "1.30.0",

View File

@@ -8,7 +8,6 @@
import { z } from 'zod';
import * as schema from './schema.js';
import { ACP_ERROR_CODES } from './errorCodes.js';
export * from './schema.js';
import type { WritableStream, ReadableStream } from 'node:stream/web';
@@ -71,13 +70,6 @@ export class AgentSideConnection implements Client {
const validatedParams = schema.setModeRequestSchema.parse(params);
return agent.setMode(validatedParams);
}
case schema.AGENT_METHODS.session_set_model: {
if (!agent.setModel) {
throw RequestError.methodNotFound();
}
const validatedParams = schema.setModelRequestSchema.parse(params);
return agent.setModel(validatedParams);
}
default:
throw RequestError.methodNotFound(method);
}
@@ -350,51 +342,27 @@ export class RequestError extends Error {
}
static parseError(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.PARSE_ERROR,
'Parse error',
details,
);
return new RequestError(-32700, 'Parse error', details);
}
static invalidRequest(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.INVALID_REQUEST,
'Invalid request',
details,
);
return new RequestError(-32600, 'Invalid request', details);
}
static methodNotFound(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.METHOD_NOT_FOUND,
'Method not found',
details,
);
return new RequestError(-32601, 'Method not found', details);
}
static invalidParams(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.INVALID_PARAMS,
'Invalid params',
details,
);
return new RequestError(-32602, 'Invalid params', details);
}
static internalError(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.INTERNAL_ERROR,
'Internal error',
details,
);
return new RequestError(-32603, 'Internal error', details);
}
static authRequired(details?: string): RequestError {
return new RequestError(
ACP_ERROR_CODES.AUTH_REQUIRED,
'Authentication required',
details,
);
return new RequestError(-32000, 'Authentication required', details);
}
toResult<T>(): Result<T> {
@@ -440,5 +408,4 @@ export interface Agent {
prompt(params: schema.PromptRequest): Promise<schema.PromptResponse>;
cancel(params: schema.CancelNotification): Promise<void>;
setMode?(params: schema.SetModeRequest): Promise<schema.SetModeResponse>;
setModel?(params: schema.SetModelRequest): Promise<schema.SetModelResponse>;
}

View File

@@ -165,11 +165,30 @@ class GeminiAgent {
this.setupFileSystem(config);
const session = await this.createAndStoreSession(config);
const availableModels = this.buildAvailableModels(config);
const configuredModel = (
config.getModel() ||
this.config.getModel() ||
''
).trim();
const modelId = configuredModel || 'default';
const modelName = configuredModel || modelId;
return {
sessionId: session.getId(),
models: availableModels,
models: {
currentModelId: modelId,
availableModels: [
{
modelId,
name: modelName,
description: null,
_meta: {
contextLimit: tokenLimit(modelId),
},
},
],
_meta: null,
},
};
}
@@ -286,29 +305,15 @@ class GeminiAgent {
async setMode(params: acp.SetModeRequest): Promise<acp.SetModeResponse> {
const session = this.sessions.get(params.sessionId);
if (!session) {
throw acp.RequestError.invalidParams(
`Session not found for id: ${params.sessionId}`,
);
throw new Error(`Session not found: ${params.sessionId}`);
}
return session.setMode(params);
}
async setModel(params: acp.SetModelRequest): Promise<acp.SetModelResponse> {
const session = this.sessions.get(params.sessionId);
if (!session) {
throw acp.RequestError.invalidParams(
`Session not found for id: ${params.sessionId}`,
);
}
return session.setModel(params);
}
private async ensureAuthenticated(config: Config): Promise<void> {
const selectedType = this.settings.merged.security?.auth?.selectedType;
if (!selectedType) {
throw acp.RequestError.authRequired(
'Use Qwen Code CLI to authenticate first.',
);
throw acp.RequestError.authRequired('No Selected Type');
}
try {
@@ -377,43 +382,4 @@ class GeminiAgent {
return session;
}
private buildAvailableModels(
config: Config,
): acp.NewSessionResponse['models'] {
const currentModelId = (
config.getModel() ||
this.config.getModel() ||
''
).trim();
const availableModels = config.getAvailableModels();
const mappedAvailableModels = availableModels.map((model) => ({
modelId: model.id,
name: model.label,
description: model.description ?? null,
_meta: {
contextLimit: tokenLimit(model.id),
},
}));
if (
currentModelId &&
!mappedAvailableModels.some((model) => model.modelId === currentModelId)
) {
mappedAvailableModels.unshift({
modelId: currentModelId,
name: currentModelId,
description: null,
_meta: {
contextLimit: tokenLimit(currentModelId),
},
});
}
return {
currentModelId,
availableModels: mappedAvailableModels,
};
}
}

View File

@@ -1,25 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
export const ACP_ERROR_CODES = {
// Parse error: invalid JSON received by server.
PARSE_ERROR: -32700,
// Invalid request: JSON is not a valid Request object.
INVALID_REQUEST: -32600,
// Method not found: method does not exist or is unavailable.
METHOD_NOT_FOUND: -32601,
// Invalid params: invalid method parameter(s).
INVALID_PARAMS: -32602,
// Internal error: implementation-defined server error.
INTERNAL_ERROR: -32603,
// Authentication required: must authenticate before operation.
AUTH_REQUIRED: -32000,
// Resource not found: e.g. missing file.
RESOURCE_NOT_FOUND: -32002,
} as const;
export type AcpErrorCode =
(typeof ACP_ERROR_CODES)[keyof typeof ACP_ERROR_CODES];

View File

@@ -15,7 +15,6 @@ export const AGENT_METHODS = {
session_prompt: 'session/prompt',
session_list: 'session/list',
session_set_mode: 'session/set_mode',
session_set_model: 'session/set_model',
};
export const CLIENT_METHODS = {
@@ -267,18 +266,6 @@ export const modelInfoSchema = z.object({
name: z.string(),
});
export const setModelRequestSchema = z.object({
sessionId: z.string(),
modelId: z.string(),
});
export const setModelResponseSchema = z.object({
modelId: z.string(),
});
export type SetModelRequest = z.infer<typeof setModelRequestSchema>;
export type SetModelResponse = z.infer<typeof setModelResponseSchema>;
export const sessionModelStateSchema = z.object({
_meta: acpMetaSchema,
availableModels: z.array(modelInfoSchema),
@@ -605,7 +592,6 @@ export const agentResponseSchema = z.union([
promptResponseSchema,
listSessionsResponseSchema,
setModeResponseSchema,
setModelResponseSchema,
]);
export const requestPermissionRequestSchema = z.object({
@@ -638,7 +624,6 @@ export const agentRequestSchema = z.union([
promptRequestSchema,
listSessionsRequestSchema,
setModeRequestSchema,
setModelRequestSchema,
]);
export const agentNotificationSchema = sessionNotificationSchema;

View File

@@ -7,7 +7,6 @@
import { describe, expect, it, vi } from 'vitest';
import type { FileSystemService } from '@qwen-code/qwen-code-core';
import { AcpFileSystemService } from './filesystem.js';
import { ACP_ERROR_CODES } from '../errorCodes.js';
const createFallback = (): FileSystemService => ({
readTextFile: vi.fn(),
@@ -17,13 +16,11 @@ const createFallback = (): FileSystemService => ({
describe('AcpFileSystemService', () => {
describe('readTextFile ENOENT handling', () => {
it('converts RESOURCE_NOT_FOUND error to ENOENT', async () => {
const resourceNotFoundError = {
code: ACP_ERROR_CODES.RESOURCE_NOT_FOUND,
message: 'File not found',
};
it('parses path from ACP ENOENT message (quoted)', async () => {
const client = {
readTextFile: vi.fn().mockRejectedValue(resourceNotFoundError),
readTextFile: vi
.fn()
.mockResolvedValue({ content: 'ERROR: ENOENT: "/remote/file.txt"' }),
} as unknown as import('../acp.js').Client;
const svc = new AcpFileSystemService(
@@ -33,20 +30,15 @@ describe('AcpFileSystemService', () => {
createFallback(),
);
await expect(svc.readTextFile('/some/file.txt')).rejects.toMatchObject({
await expect(svc.readTextFile('/local/file.txt')).rejects.toMatchObject({
code: 'ENOENT',
errno: -2,
path: '/some/file.txt',
path: '/remote/file.txt',
});
});
it('re-throws other errors unchanged', async () => {
const otherError = {
code: ACP_ERROR_CODES.INTERNAL_ERROR,
message: 'Internal error',
};
it('falls back to requested path when none provided', async () => {
const client = {
readTextFile: vi.fn().mockRejectedValue(otherError),
readTextFile: vi.fn().mockResolvedValue({ content: 'ERROR: ENOENT:' }),
} as unknown as import('../acp.js').Client;
const svc = new AcpFileSystemService(
@@ -56,34 +48,12 @@ describe('AcpFileSystemService', () => {
createFallback(),
);
await expect(svc.readTextFile('/some/file.txt')).rejects.toMatchObject({
code: ACP_ERROR_CODES.INTERNAL_ERROR,
message: 'Internal error',
await expect(
svc.readTextFile('/fallback/path.txt'),
).rejects.toMatchObject({
code: 'ENOENT',
path: '/fallback/path.txt',
});
});
it('uses fallback when readTextFile capability is disabled', async () => {
const client = {
readTextFile: vi.fn(),
} as unknown as import('../acp.js').Client;
const fallback = createFallback();
(fallback.readTextFile as ReturnType<typeof vi.fn>).mockResolvedValue(
'fallback content',
);
const svc = new AcpFileSystemService(
client,
'session-3',
{ readTextFile: false, writeTextFile: true },
fallback,
);
const result = await svc.readTextFile('/some/file.txt');
expect(result).toBe('fallback content');
expect(fallback.readTextFile).toHaveBeenCalledWith('/some/file.txt');
expect(client.readTextFile).not.toHaveBeenCalled();
});
});
});

View File

@@ -6,7 +6,6 @@
import type { FileSystemService } from '@qwen-code/qwen-code-core';
import type * as acp from '../acp.js';
import { ACP_ERROR_CODES } from '../errorCodes.js';
/**
* ACP client-based implementation of FileSystemService
@@ -24,31 +23,25 @@ export class AcpFileSystemService implements FileSystemService {
return this.fallback.readTextFile(filePath);
}
let response: { content: string };
try {
response = await this.client.readTextFile({
path: filePath,
sessionId: this.sessionId,
line: null,
limit: null,
});
} catch (error) {
const errorCode =
typeof error === 'object' && error !== null && 'code' in error
? (error as { code?: unknown }).code
: undefined;
const response = await this.client.readTextFile({
path: filePath,
sessionId: this.sessionId,
line: null,
limit: null,
});
if (errorCode === ACP_ERROR_CODES.RESOURCE_NOT_FOUND) {
const err = new Error(
`File not found: ${filePath}`,
) as NodeJS.ErrnoException;
err.code = 'ENOENT';
err.errno = -2;
err.path = filePath;
throw err;
}
throw error;
if (response.content.startsWith('ERROR: ENOENT:')) {
// Treat ACP error strings as structured ENOENT errors without
// assuming a specific platform format.
const match = /^ERROR:\s*ENOENT:\s*(?<path>.*)$/i.exec(response.content);
const err = new Error(response.content) as NodeJS.ErrnoException;
err.code = 'ENOENT';
err.errno = -2;
const rawPath = match?.groups?.['path']?.trim();
err['path'] = rawPath
? rawPath.replace(/^['"]|['"]$/g, '') || filePath
: filePath;
throw err;
}
return response.content;

View File

@@ -1,174 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { Session } from './Session.js';
import type { Config, GeminiChat } from '@qwen-code/qwen-code-core';
import { ApprovalMode } from '@qwen-code/qwen-code-core';
import type * as acp from '../acp.js';
import type { LoadedSettings } from '../../config/settings.js';
import * as nonInteractiveCliCommands from '../../nonInteractiveCliCommands.js';
vi.mock('../../nonInteractiveCliCommands.js', () => ({
getAvailableCommands: vi.fn(),
handleSlashCommand: vi.fn(),
}));
describe('Session', () => {
let mockChat: GeminiChat;
let mockConfig: Config;
let mockClient: acp.Client;
let mockSettings: LoadedSettings;
let session: Session;
let currentModel: string;
let setModelSpy: ReturnType<typeof vi.fn>;
let getAvailableCommandsSpy: ReturnType<typeof vi.fn>;
beforeEach(() => {
currentModel = 'qwen3-code-plus';
setModelSpy = vi.fn().mockImplementation(async (modelId: string) => {
currentModel = modelId;
});
mockChat = {
sendMessageStream: vi.fn(),
addHistory: vi.fn(),
} as unknown as GeminiChat;
mockConfig = {
setApprovalMode: vi.fn(),
setModel: setModelSpy,
getModel: vi.fn().mockImplementation(() => currentModel),
} as unknown as Config;
mockClient = {
sessionUpdate: vi.fn().mockResolvedValue(undefined),
requestPermission: vi.fn().mockResolvedValue({
outcome: { outcome: 'selected', optionId: 'proceed_once' },
}),
sendCustomNotification: vi.fn().mockResolvedValue(undefined),
} as unknown as acp.Client;
mockSettings = {
merged: {},
} as LoadedSettings;
getAvailableCommandsSpy = vi.mocked(nonInteractiveCliCommands)
.getAvailableCommands as unknown as ReturnType<typeof vi.fn>;
getAvailableCommandsSpy.mockResolvedValue([]);
session = new Session(
'test-session-id',
mockChat,
mockConfig,
mockClient,
mockSettings,
);
});
describe('setMode', () => {
it.each([
['plan', ApprovalMode.PLAN],
['default', ApprovalMode.DEFAULT],
['auto-edit', ApprovalMode.AUTO_EDIT],
['yolo', ApprovalMode.YOLO],
] as const)('maps %s mode', async (modeId, expected) => {
const result = await session.setMode({
sessionId: 'test-session-id',
modeId,
});
expect(mockConfig.setApprovalMode).toHaveBeenCalledWith(expected);
expect(result).toEqual({ modeId });
});
});
describe('setModel', () => {
it('sets model via config and returns current model', async () => {
const result = await session.setModel({
sessionId: 'test-session-id',
modelId: ' qwen3-coder-plus ',
});
expect(mockConfig.setModel).toHaveBeenCalledWith('qwen3-coder-plus', {
reason: 'user_request_acp',
context: 'session/set_model',
});
expect(mockConfig.getModel).toHaveBeenCalled();
expect(result).toEqual({ modelId: 'qwen3-coder-plus' });
});
it('rejects empty/whitespace model IDs', async () => {
await expect(
session.setModel({
sessionId: 'test-session-id',
modelId: ' ',
}),
).rejects.toThrow('Invalid params');
expect(mockConfig.setModel).not.toHaveBeenCalled();
});
it('propagates errors from config.setModel', async () => {
const configError = new Error('Invalid model');
setModelSpy.mockRejectedValueOnce(configError);
await expect(
session.setModel({
sessionId: 'test-session-id',
modelId: 'invalid-model',
}),
).rejects.toThrow('Invalid model');
});
});
describe('sendAvailableCommandsUpdate', () => {
it('sends available_commands_update from getAvailableCommands()', async () => {
getAvailableCommandsSpy.mockResolvedValueOnce([
{
name: 'init',
description: 'Initialize project context',
},
]);
await session.sendAvailableCommandsUpdate();
expect(getAvailableCommandsSpy).toHaveBeenCalledWith(
mockConfig,
expect.any(AbortSignal),
);
expect(mockClient.sessionUpdate).toHaveBeenCalledWith({
sessionId: 'test-session-id',
update: {
sessionUpdate: 'available_commands_update',
availableCommands: [
{
name: 'init',
description: 'Initialize project context',
input: null,
},
],
},
});
});
it('swallows errors and does not throw', async () => {
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => undefined);
getAvailableCommandsSpy.mockRejectedValueOnce(
new Error('Command discovery failed'),
);
await expect(
session.sendAvailableCommandsUpdate(),
).resolves.toBeUndefined();
expect(mockClient.sessionUpdate).not.toHaveBeenCalled();
expect(consoleErrorSpy).toHaveBeenCalled();
consoleErrorSpy.mockRestore();
});
});
});

View File

@@ -52,8 +52,6 @@ import type {
AvailableCommandsUpdate,
SetModeRequest,
SetModeResponse,
SetModelRequest,
SetModelResponse,
ApprovalModeValue,
CurrentModeUpdate,
} from '../schema.js';
@@ -350,31 +348,6 @@ export class Session implements SessionContext {
return { modeId: params.modeId };
}
/**
* Sets the model for the current session.
* Validates the model ID and switches the model via Config.
*/
async setModel(params: SetModelRequest): Promise<SetModelResponse> {
const modelId = params.modelId.trim();
if (!modelId) {
throw acp.RequestError.invalidParams('modelId cannot be empty');
}
// Attempt to set the model using config
await this.config.setModel(modelId, {
reason: 'user_request_acp',
context: 'session/set_model',
});
// Get updated model info
const currentModel = this.config.getModel();
return {
modelId: currentModel,
};
}
/**
* Sends a current_mode_update notification to the client.
* Called after the agent switches modes (e.g., from exit_plan_mode tool).

View File

@@ -1,112 +1,41 @@
/**
* @license
* Copyright 2025 Qwen Team
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { AuthType } from '@qwen-code/qwen-code-core';
import { vi } from 'vitest';
import { validateAuthMethod } from './auth.js';
import * as settings from './settings.js';
vi.mock('./settings.js', () => ({
loadEnvironment: vi.fn(),
loadSettings: vi.fn().mockReturnValue({
merged: {},
merged: vi.fn().mockReturnValue({}),
}),
}));
describe('validateAuthMethod', () => {
beforeEach(() => {
vi.resetModules();
// Reset mock to default
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {},
} as ReturnType<typeof settings.loadSettings>);
});
afterEach(() => {
vi.unstubAllEnvs();
delete process.env['OPENAI_API_KEY'];
delete process.env['CUSTOM_API_KEY'];
delete process.env['GEMINI_API_KEY'];
delete process.env['GEMINI_API_KEY_ALTERED'];
delete process.env['ANTHROPIC_API_KEY'];
delete process.env['ANTHROPIC_BASE_URL'];
delete process.env['GOOGLE_API_KEY'];
});
it('should return null for USE_OPENAI with default env key', () => {
it('should return null for USE_OPENAI', () => {
process.env['OPENAI_API_KEY'] = 'fake-key';
expect(validateAuthMethod(AuthType.USE_OPENAI)).toBeNull();
});
it('should return an error message for USE_OPENAI if no API key is available', () => {
it('should return an error message for USE_OPENAI if OPENAI_API_KEY is not set', () => {
delete process.env['OPENAI_API_KEY'];
expect(validateAuthMethod(AuthType.USE_OPENAI)).toBe(
"Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the 'OPENAI_API_KEY' environment variable.",
'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.',
);
});
it('should return null for USE_OPENAI with custom envKey from modelProviders', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'custom-model' },
modelProviders: {
openai: [{ id: 'custom-model', envKey: 'CUSTOM_API_KEY' }],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
process.env['CUSTOM_API_KEY'] = 'custom-key';
expect(validateAuthMethod(AuthType.USE_OPENAI)).toBeNull();
});
it('should return error with custom envKey hint when modelProviders envKey is set but env var is missing', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'custom-model' },
modelProviders: {
openai: [{ id: 'custom-model', envKey: 'CUSTOM_API_KEY' }],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
const result = validateAuthMethod(AuthType.USE_OPENAI);
expect(result).toContain('CUSTOM_API_KEY');
});
it('should return null for USE_GEMINI with custom envKey', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'gemini-1.5-flash' },
modelProviders: {
gemini: [
{ id: 'gemini-1.5-flash', envKey: 'GEMINI_API_KEY_ALTERED' },
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
process.env['GEMINI_API_KEY_ALTERED'] = 'altered-key';
expect(validateAuthMethod(AuthType.USE_GEMINI)).toBeNull();
});
it('should return error with custom envKey for USE_GEMINI when env var is missing', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'gemini-1.5-flash' },
modelProviders: {
gemini: [
{ id: 'gemini-1.5-flash', envKey: 'GEMINI_API_KEY_ALTERED' },
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
const result = validateAuthMethod(AuthType.USE_GEMINI);
expect(result).toContain('GEMINI_API_KEY_ALTERED');
});
it('should return null for QWEN_OAUTH', () => {
expect(validateAuthMethod(AuthType.QWEN_OAUTH)).toBeNull();
});
@@ -116,115 +45,4 @@ describe('validateAuthMethod', () => {
'Invalid auth method selected.',
);
});
it('should return null for USE_ANTHROPIC with custom envKey and baseUrl', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'claude-3' },
modelProviders: {
anthropic: [
{
id: 'claude-3',
envKey: 'CUSTOM_ANTHROPIC_KEY',
baseUrl: 'https://api.anthropic.com',
},
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
process.env['CUSTOM_ANTHROPIC_KEY'] = 'custom-anthropic-key';
expect(validateAuthMethod(AuthType.USE_ANTHROPIC)).toBeNull();
});
it('should return error for USE_ANTHROPIC when baseUrl is missing', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'claude-3' },
modelProviders: {
anthropic: [{ id: 'claude-3', envKey: 'CUSTOM_ANTHROPIC_KEY' }],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
process.env['CUSTOM_ANTHROPIC_KEY'] = 'custom-key';
const result = validateAuthMethod(AuthType.USE_ANTHROPIC);
expect(result).toContain('modelProviders[].baseUrl');
});
it('should return null for USE_VERTEX_AI with custom envKey', () => {
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'vertex-model' },
modelProviders: {
'vertex-ai': [
{ id: 'vertex-model', envKey: 'GOOGLE_API_KEY_VERTEX' },
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
process.env['GOOGLE_API_KEY_VERTEX'] = 'vertex-key';
expect(validateAuthMethod(AuthType.USE_VERTEX_AI)).toBeNull();
});
it('should use config.modelsConfig.getModel() when Config is provided', () => {
// Settings has a different model
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'settings-model' },
modelProviders: {
openai: [
{ id: 'settings-model', envKey: 'SETTINGS_API_KEY' },
{ id: 'cli-model', envKey: 'CLI_API_KEY' },
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
// Mock Config object that returns a different model (e.g., from CLI args)
const mockConfig = {
modelsConfig: {
getModel: vi.fn().mockReturnValue('cli-model'),
},
} as unknown as import('@qwen-code/qwen-code-core').Config;
// Set the env key for the CLI model, not the settings model
process.env['CLI_API_KEY'] = 'cli-key';
// Should use 'cli-model' from config.modelsConfig.getModel(), not 'settings-model'
const result = validateAuthMethod(AuthType.USE_OPENAI, mockConfig);
expect(result).toBeNull();
expect(mockConfig.modelsConfig.getModel).toHaveBeenCalled();
});
it('should fail validation when Config provides different model without matching env key', () => {
// Clean up any existing env keys first
delete process.env['CLI_API_KEY'];
delete process.env['SETTINGS_API_KEY'];
delete process.env['OPENAI_API_KEY'];
vi.mocked(settings.loadSettings).mockReturnValue({
merged: {
model: { name: 'settings-model' },
modelProviders: {
openai: [
{ id: 'settings-model', envKey: 'SETTINGS_API_KEY' },
{ id: 'cli-model', envKey: 'CLI_API_KEY' },
],
},
},
} as unknown as ReturnType<typeof settings.loadSettings>);
const mockConfig = {
modelsConfig: {
getModel: vi.fn().mockReturnValue('cli-model'),
},
} as unknown as import('@qwen-code/qwen-code-core').Config;
// Don't set CLI_API_KEY - validation should fail
const result = validateAuthMethod(AuthType.USE_OPENAI, mockConfig);
expect(result).not.toBeNull();
expect(result).toContain('CLI_API_KEY');
});
});

View File

@@ -1,169 +1,21 @@
/**
* @license
* Copyright 2025 Qwen Team
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import {
AuthType,
type Config,
type ModelProvidersConfig,
type ProviderModelConfig,
} from '@qwen-code/qwen-code-core';
import { loadEnvironment, loadSettings, type Settings } from './settings.js';
import { t } from '../i18n/index.js';
import { AuthType } from '@qwen-code/qwen-code-core';
import { loadEnvironment, loadSettings } from './settings.js';
/**
* Default environment variable names for each auth type
*/
const DEFAULT_ENV_KEYS: Record<string, string> = {
[AuthType.USE_OPENAI]: 'OPENAI_API_KEY',
[AuthType.USE_ANTHROPIC]: 'ANTHROPIC_API_KEY',
[AuthType.USE_GEMINI]: 'GEMINI_API_KEY',
[AuthType.USE_VERTEX_AI]: 'GOOGLE_API_KEY',
};
/**
* Find model configuration from modelProviders by authType and modelId
*/
function findModelConfig(
modelProviders: ModelProvidersConfig | undefined,
authType: string,
modelId: string | undefined,
): ProviderModelConfig | undefined {
if (!modelProviders || !modelId) {
return undefined;
}
const models = modelProviders[authType];
if (!Array.isArray(models)) {
return undefined;
}
return models.find((m) => m.id === modelId);
}
/**
* Check if API key is available for the given auth type and model configuration.
* Prioritizes custom envKey from modelProviders over default environment variables.
*/
function hasApiKeyForAuth(
authType: string,
settings: Settings,
config?: Config,
): {
hasKey: boolean;
checkedEnvKey: string | undefined;
isExplicitEnvKey: boolean;
} {
const modelProviders = settings.modelProviders as
| ModelProvidersConfig
| undefined;
// Use config.modelsConfig.getModel() if available for accurate model ID resolution
// that accounts for CLI args, env vars, and settings. Fall back to settings.model.name.
const modelId = config?.modelsConfig.getModel() ?? settings.model?.name;
// Try to find model-specific envKey from modelProviders
const modelConfig = findModelConfig(modelProviders, authType, modelId);
if (modelConfig?.envKey) {
// Explicit envKey configured - only check this env var, no apiKey fallback
const hasKey = !!process.env[modelConfig.envKey];
return {
hasKey,
checkedEnvKey: modelConfig.envKey,
isExplicitEnvKey: true,
};
}
// Using default environment variable - apiKey fallback is allowed
const defaultEnvKey = DEFAULT_ENV_KEYS[authType];
if (defaultEnvKey) {
const hasKey = !!process.env[defaultEnvKey];
if (hasKey) {
return { hasKey, checkedEnvKey: defaultEnvKey, isExplicitEnvKey: false };
}
}
// Also check settings.security.auth.apiKey as fallback (only for default env key)
if (settings.security?.auth?.apiKey) {
return {
hasKey: true,
checkedEnvKey: defaultEnvKey || undefined,
isExplicitEnvKey: false,
};
}
return {
hasKey: false,
checkedEnvKey: defaultEnvKey,
isExplicitEnvKey: false,
};
}
/**
* Generate API key error message based on auth check result.
* Returns null if API key is present, otherwise returns the appropriate error message.
*/
function getApiKeyError(
authMethod: string,
settings: Settings,
config?: Config,
): string | null {
const { hasKey, checkedEnvKey, isExplicitEnvKey } = hasApiKeyForAuth(
authMethod,
settings,
config,
);
if (hasKey) {
return null;
}
const envKeyHint = checkedEnvKey || DEFAULT_ENV_KEYS[authMethod];
if (isExplicitEnvKey) {
return t(
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.',
{ envKeyHint },
);
}
return t(
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.',
{ envKeyHint },
);
}
/**
* Validate that the required credentials and configuration exist for the given auth method.
*/
export function validateAuthMethod(
authMethod: string,
config?: Config,
): string | null {
export function validateAuthMethod(authMethod: string): string | null {
const settings = loadSettings();
loadEnvironment(settings.merged);
if (authMethod === AuthType.USE_OPENAI) {
const { hasKey, checkedEnvKey, isExplicitEnvKey } = hasApiKeyForAuth(
authMethod,
settings.merged,
config,
);
if (!hasKey) {
const envKeyHint = checkedEnvKey
? `'${checkedEnvKey}'`
: "'OPENAI_API_KEY'";
if (isExplicitEnvKey) {
// Explicit envKey configured - only suggest setting the env var
return t(
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.',
{ envKeyHint },
);
}
// Default env key - can use either apiKey or env var
return t(
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.',
{ envKeyHint },
);
const hasApiKey =
process.env['OPENAI_API_KEY'] || settings.merged.security?.auth?.apiKey;
if (!hasApiKey) {
return 'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.';
}
return null;
}
@@ -175,49 +27,36 @@ export function validateAuthMethod(
}
if (authMethod === AuthType.USE_ANTHROPIC) {
const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
if (apiKeyError) {
return apiKeyError;
const hasApiKey = process.env['ANTHROPIC_API_KEY'];
if (!hasApiKey) {
return 'ANTHROPIC_API_KEY environment variable not found.';
}
// Check baseUrl - can come from modelProviders or environment
const modelProviders = settings.merged.modelProviders as
| ModelProvidersConfig
| undefined;
// Use config.modelsConfig.getModel() if available for accurate model ID
const modelId =
config?.modelsConfig.getModel() ?? settings.merged.model?.name;
const modelConfig = findModelConfig(modelProviders, authMethod, modelId);
if (modelConfig && !modelConfig.baseUrl) {
return t(
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.',
);
}
if (!modelConfig && !process.env['ANTHROPIC_BASE_URL']) {
return t('ANTHROPIC_BASE_URL environment variable not found.');
const hasBaseUrl = process.env['ANTHROPIC_BASE_URL'];
if (!hasBaseUrl) {
return 'ANTHROPIC_BASE_URL environment variable not found.';
}
return null;
}
if (authMethod === AuthType.USE_GEMINI) {
const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
if (apiKeyError) {
return apiKeyError;
const hasApiKey = process.env['GEMINI_API_KEY'];
if (!hasApiKey) {
return 'GEMINI_API_KEY environment variable not found. Please set it in your .env file or environment variables.';
}
return null;
}
if (authMethod === AuthType.USE_VERTEX_AI) {
const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
if (apiKeyError) {
return apiKeyError;
const hasApiKey = process.env['GOOGLE_API_KEY'];
if (!hasApiKey) {
return 'GOOGLE_API_KEY environment variable not found. Please set it in your .env file or environment variables.';
}
process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
return null;
}
return t('Invalid auth method selected.');
return 'Invalid auth method selected.';
}

View File

@@ -20,25 +20,6 @@ import { ExtensionStorage, type Extension } from './extension.js';
import * as ServerConfig from '@qwen-code/qwen-code-core';
import { isWorkspaceTrusted } from './trustedFolders.js';
import { ExtensionEnablementManager } from './extensions/extensionEnablement.js';
import { NativeLspService } from '../services/lsp/NativeLspService.js';
const createNativeLspServiceInstance = () => ({
discoverAndPrepare: vi.fn(),
start: vi.fn(),
definitions: vi.fn().mockResolvedValue([]),
references: vi.fn().mockResolvedValue([]),
workspaceSymbols: vi.fn().mockResolvedValue([]),
});
vi.mock('../services/lsp/NativeLspService.js', () => ({
NativeLspService: vi.fn().mockImplementation(() => ({
discoverAndPrepare: vi.fn(),
start: vi.fn(),
definitions: vi.fn().mockResolvedValue([]),
references: vi.fn().mockResolvedValue([]),
workspaceSymbols: vi.fn().mockResolvedValue([]),
})),
}));
vi.mock('./trustedFolders.js', () => ({
isWorkspaceTrusted: vi
@@ -46,17 +27,6 @@ vi.mock('./trustedFolders.js', () => ({
.mockReturnValue({ isTrusted: true, source: 'file' }), // Default to trusted
}));
const nativeLspServiceMock = vi.mocked(NativeLspService);
const getLastLspInstance = () => {
const results = nativeLspServiceMock.mock.results;
if (results.length === 0) {
return undefined;
}
return results[results.length - 1]?.value as ReturnType<
typeof createNativeLspServiceInstance
>;
};
vi.mock('fs', async (importOriginal) => {
const actualFs = await importOriginal<typeof import('fs')>();
const pathMod = await import('node:path');
@@ -107,8 +77,10 @@ vi.mock('read-package-up', () => ({
),
}));
vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
const actualServer = await importOriginal<typeof ServerConfig>();
vi.mock('@qwen-code/qwen-code-core', async () => {
const actualServer = await vi.importActual<typeof ServerConfig>(
'@qwen-code/qwen-code-core',
);
return {
...actualServer,
IdeClient: {
@@ -546,10 +518,6 @@ describe('loadCliConfig', () => {
beforeEach(() => {
vi.resetAllMocks();
nativeLspServiceMock.mockReset();
nativeLspServiceMock.mockImplementation(() =>
createNativeLspServiceInstance(),
);
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
vi.stubEnv('GEMINI_API_KEY', 'test-api-key');
});
@@ -619,63 +587,6 @@ describe('loadCliConfig', () => {
expect(config.getShowMemoryUsage()).toBe(false);
});
it('should initialize native LSP service when enabled', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
const settings: Settings = {
lsp: {
enabled: true,
allowed: ['typescript-language-server'],
excluded: ['pylsp'],
},
};
const config = await loadCliConfig(
settings,
[],
new ExtensionEnablementManager(
ExtensionStorage.getUserExtensionsDir(),
argv.extensions,
),
argv,
);
expect(config.isLspEnabled()).toBe(true);
expect(config.getLspAllowed()).toEqual(['typescript-language-server']);
expect(config.getLspExcluded()).toEqual(['pylsp']);
expect(nativeLspServiceMock).toHaveBeenCalledTimes(1);
const lspInstance = getLastLspInstance();
expect(lspInstance).toBeDefined();
expect(lspInstance?.discoverAndPrepare).toHaveBeenCalledTimes(1);
expect(lspInstance?.start).toHaveBeenCalledTimes(1);
const options = nativeLspServiceMock.mock.calls[0][5];
expect(options?.allowedServers).toEqual(['typescript-language-server']);
expect(options?.excludedServers).toEqual(['pylsp']);
});
it('should skip native LSP startup when startLsp option is false', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
const settings: Settings = { lsp: { enabled: true } };
const config = await loadCliConfig(
settings,
[],
new ExtensionEnablementManager(
ExtensionStorage.getUserExtensionsDir(),
argv.extensions,
),
argv,
undefined,
{ startLsp: false },
);
expect(config.isLspEnabled()).toBe(true);
expect(nativeLspServiceMock).not.toHaveBeenCalled();
expect(getLastLspInstance()).toBeUndefined();
});
it('should set showMemoryUsage to false by default from settings if CLI flag is not present', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments({} as Settings);
@@ -1287,6 +1198,11 @@ describe('Hierarchical Memory Loading (config.ts) - Placeholder Suite', () => {
],
true,
'tree',
{
respectGitIgnore: false,
respectQwenIgnore: true,
},
undefined, // maxDirs
);
});

View File

@@ -9,6 +9,7 @@ import {
AuthType,
Config,
DEFAULT_QWEN_EMBEDDING_MODEL,
DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
FileDiscoveryService,
getCurrentGeminiMdFilename,
loadServerHierarchicalMemory,
@@ -20,10 +21,9 @@ import {
OutputFormat,
isToolEnabled,
SessionService,
ideContextStore,
type ResumedSessionData,
type FileFilteringOptions,
type MCPServerConfig,
type LspClient,
type ToolName,
EditTool,
ShellTool,
@@ -31,10 +31,6 @@ import {
} from '@qwen-code/qwen-code-core';
import { extensionsCommand } from '../commands/extensions.js';
import type { Settings } from './settings.js';
import {
resolveCliGenerationConfig,
getAuthTypeFromEnv,
} from '../utils/modelConfigUtils.js';
import yargs, { type Argv } from 'yargs';
import { hideBin } from 'yargs/helpers';
import * as fs from 'node:fs';
@@ -48,7 +44,6 @@ import { annotateActiveExtensions } from './extension.js';
import { loadSandboxConfig } from './sandboxConfig.js';
import { appEvents } from '../utils/events.js';
import { mcpCommand } from '../commands/mcp.js';
import { NativeLspService } from '../services/lsp/NativeLspService.js';
import { isWorkspaceTrusted } from './trustedFolders.js';
import type { ExtensionEnablementManager } from './extensions/extensionEnablement.js';
@@ -121,7 +116,6 @@ export interface CliArgs {
acp: boolean | undefined;
experimentalAcp: boolean | undefined;
experimentalSkills: boolean | undefined;
experimentalLsp: boolean | undefined;
extensions: string[] | undefined;
listExtensions: boolean | undefined;
openaiLogging: boolean | undefined;
@@ -156,142 +150,6 @@ export interface CliArgs {
channel: string | undefined;
}
export interface LoadCliConfigOptions {
/**
* Whether to start the native LSP service during config load.
* Disable when doing preflight runs (e.g., sandbox preparation).
*/
startLsp?: boolean;
}
class NativeLspClient implements LspClient {
constructor(private readonly service: NativeLspService) {}
workspaceSymbols(query: string, limit?: number) {
return this.service.workspaceSymbols(query, limit);
}
definitions(
location: Parameters<NativeLspService['definitions']>[0],
serverName?: string,
limit?: number,
) {
return this.service.definitions(location, serverName, limit);
}
references(
location: Parameters<NativeLspService['references']>[0],
serverName?: string,
includeDeclaration?: boolean,
limit?: number,
) {
return this.service.references(
location,
serverName,
includeDeclaration,
limit,
);
}
/**
* Get hover information (documentation, type info) for a symbol.
*/
hover(
location: Parameters<NativeLspService['hover']>[0],
serverName?: string,
) {
return this.service.hover(location, serverName);
}
/**
* Get all symbols in a document.
*/
documentSymbols(uri: string, serverName?: string, limit?: number) {
return this.service.documentSymbols(uri, serverName, limit);
}
/**
* Find implementations of an interface or abstract method.
*/
implementations(
location: Parameters<NativeLspService['implementations']>[0],
serverName?: string,
limit?: number,
) {
return this.service.implementations(location, serverName, limit);
}
/**
* Prepare call hierarchy item at a position (functions/methods).
*/
prepareCallHierarchy(
location: Parameters<NativeLspService['prepareCallHierarchy']>[0],
serverName?: string,
limit?: number,
) {
return this.service.prepareCallHierarchy(location, serverName, limit);
}
/**
* Find all functions/methods that call the given function.
*/
incomingCalls(
item: Parameters<NativeLspService['incomingCalls']>[0],
serverName?: string,
limit?: number,
) {
return this.service.incomingCalls(item, serverName, limit);
}
/**
* Find all functions/methods called by the given function.
*/
outgoingCalls(
item: Parameters<NativeLspService['outgoingCalls']>[0],
serverName?: string,
limit?: number,
) {
return this.service.outgoingCalls(item, serverName, limit);
}
/**
* Get diagnostics for a specific document.
*/
diagnostics(uri: string, serverName?: string) {
return this.service.diagnostics(uri, serverName);
}
/**
* Get diagnostics for all open documents in the workspace.
*/
workspaceDiagnostics(serverName?: string, limit?: number) {
return this.service.workspaceDiagnostics(serverName, limit);
}
/**
* Get code actions available at a specific location.
*/
codeActions(
uri: string,
range: Parameters<NativeLspService['codeActions']>[1],
context: Parameters<NativeLspService['codeActions']>[2],
serverName?: string,
limit?: number,
) {
return this.service.codeActions(uri, range, context, serverName, limit);
}
/**
* Apply a workspace edit (from code action or other sources).
*/
applyWorkspaceEdit(
edit: Parameters<NativeLspService['applyWorkspaceEdit']>[0],
serverName?: string,
) {
return this.service.applyWorkspaceEdit(edit, serverName);
}
}
function normalizeOutputFormat(
format: string | OutputFormat | undefined,
): OutputFormat | undefined {
@@ -308,17 +166,7 @@ function normalizeOutputFormat(
}
export async function parseArguments(settings: Settings): Promise<CliArgs> {
let rawArgv = hideBin(process.argv);
// hack: if the first argument is the CLI entry point, remove it
if (
rawArgv.length > 0 &&
(rawArgv[0].endsWith('/dist/qwen-cli/cli.js') ||
rawArgv[0].endsWith('/dist/cli.js'))
) {
rawArgv = rawArgv.slice(1);
}
const rawArgv = hideBin(process.argv);
const yargsInstance = yargs(rawArgv)
.locale('en')
.scriptName('qwen')
@@ -472,19 +320,6 @@ export async function parseArguments(settings: Settings): Promise<CliArgs> {
.option('experimental-skills', {
type: 'boolean',
description: 'Enable experimental Skills feature',
default: (() => {
const legacySkills = (
settings as Settings & {
tools?: { experimental?: { skills?: boolean } };
}
).tools?.experimental?.skills;
return settings.experimental?.skills ?? legacySkills ?? false;
})(),
})
.option('experimental-lsp', {
type: 'boolean',
description:
'Enable experimental LSP (Language Server Protocol) feature for code intelligence',
default: false,
})
.option('channel', {
@@ -794,6 +629,7 @@ export async function loadHierarchicalGeminiMemory(
extensionContextFilePaths: string[] = [],
folderTrust: boolean,
memoryImportFormat: 'flat' | 'tree' = 'tree',
fileFilteringOptions?: FileFilteringOptions,
): Promise<{ memoryContent: string; fileCount: number }> {
// FIX: Use real, canonical paths for a reliable comparison to handle symlinks.
const realCwd = fs.realpathSync(path.resolve(currentWorkingDirectory));
@@ -819,6 +655,8 @@ export async function loadHierarchicalGeminiMemory(
extensionContextFilePaths,
folderTrust,
memoryImportFormat,
fileFilteringOptions,
settings.context?.discoveryMaxDirs,
);
}
@@ -837,7 +675,6 @@ export async function loadCliConfig(
extensionEnablementManager: ExtensionEnablementManager,
argv: CliArgs,
cwd: string = process.cwd(),
options: LoadCliConfigOptions = {},
): Promise<Config> {
const debugMode = isDebugMode(argv);
@@ -889,6 +726,11 @@ export async function loadCliConfig(
const fileService = new FileDiscoveryService(cwd);
const fileFiltering = {
...DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
...settings.context?.fileFiltering,
};
const includeDirectories = (settings.context?.includeDirectories || [])
.map(resolvePath)
.concat((argv.includeDirectories || []).map(resolvePath));
@@ -905,16 +747,10 @@ export async function loadCliConfig(
extensionContextFilePaths,
trustedFolder,
memoryImportFormat,
fileFiltering,
);
let mcpServers = mergeMcpServers(settings, activeExtensions);
// LSP configuration: enabled only via --experimental-lsp flag
const lspEnabled = argv.experimentalLsp === true;
const lspAllowed = settings.lsp?.allowed ?? settings.mcp?.allowed;
const lspExcluded = settings.lsp?.excluded ?? settings.mcp?.excluded;
const lspLanguageServers = settings.lsp?.languageServers;
let lspClient: LspClient | undefined;
const question = argv.promptInteractive || argv.prompt || '';
const inputFormat: InputFormat =
(argv.inputFormat as InputFormat | undefined) ?? InputFormat.TEXT;
@@ -1024,10 +860,11 @@ export async function loadCliConfig(
}
};
// ACP mode check: must include both --acp (current) and --experimental-acp (deprecated).
// Without this check, edit, write_file, run_shell_command would be excluded in ACP mode.
const isAcpMode = argv.acp || argv.experimentalAcp;
if (!interactive && !isAcpMode && inputFormat !== InputFormat.STREAM_JSON) {
if (
!interactive &&
!argv.experimentalAcp &&
inputFormat !== InputFormat.STREAM_JSON
) {
switch (approvalMode) {
case ApprovalMode.PLAN:
case ApprovalMode.DEFAULT:
@@ -1087,25 +924,28 @@ export async function loadCliConfig(
const selectedAuthType =
(argv.authType as AuthType | undefined) ||
settings.security?.auth?.selectedType ||
/* getAuthTypeFromEnv means no authType was explicitly provided, we infer the authType from env vars */
getAuthTypeFromEnv();
settings.security?.auth?.selectedType;
// Unified resolution of generation config with source attribution
const resolvedCliConfig = resolveCliGenerationConfig({
argv: {
model: argv.model,
openaiApiKey: argv.openaiApiKey,
openaiBaseUrl: argv.openaiBaseUrl,
openaiLogging: argv.openaiLogging,
openaiLoggingDir: argv.openaiLoggingDir,
},
settings,
selectedAuthType,
env: process.env as Record<string, string | undefined>,
});
const { model: resolvedModel } = resolvedCliConfig;
const apiKey =
(selectedAuthType === AuthType.USE_OPENAI
? argv.openaiApiKey ||
process.env['OPENAI_API_KEY'] ||
settings.security?.auth?.apiKey
: '') || '';
const baseUrl =
(selectedAuthType === AuthType.USE_OPENAI
? argv.openaiBaseUrl ||
process.env['OPENAI_BASE_URL'] ||
settings.security?.auth?.baseUrl
: '') || '';
const resolvedModel =
argv.model ||
(selectedAuthType === AuthType.USE_OPENAI
? process.env['OPENAI_MODEL'] ||
process.env['QWEN_MODEL'] ||
settings.model?.name
: '') ||
'';
const sandboxConfig = await loadSandboxConfig(settings, argv);
const screenReader =
@@ -1139,9 +979,7 @@ export async function loadCliConfig(
}
}
const modelProvidersConfig = settings.modelProviders;
const config = new Config({
return new Config({
sessionId,
sessionData,
embeddingModel: DEFAULT_QWEN_EMBEDDING_MODEL,
@@ -1198,11 +1036,24 @@ export async function loadCliConfig(
inputFormat,
outputFormat,
includePartialMessages,
modelProvidersConfig,
generationConfigSources: resolvedCliConfig.sources,
generationConfig: resolvedCliConfig.generationConfig,
generationConfig: {
...(settings.model?.generationConfig || {}),
model: resolvedModel,
apiKey,
baseUrl,
enableOpenAILogging:
(typeof argv.openaiLogging === 'undefined'
? settings.model?.enableOpenAILogging
: argv.openaiLogging) ?? false,
openAILoggingDir:
argv.openaiLoggingDir || settings.model?.openAILoggingDir,
},
cliVersion: await getCliVersion(),
webSearch: buildWebSearchConfig(argv, settings, selectedAuthType),
webSearch: buildWebSearchConfig(
argv,
settings,
settings.security?.auth?.selectedType,
),
summarizeToolOutput: settings.model?.summarizeToolOutput,
ideMode,
chatCompression: settings.model?.chatCompression,
@@ -1231,40 +1082,7 @@ export async function loadCliConfig(
// always be true and the settings file can never disable recording.
chatRecording:
argv.chatRecording ?? settings.general?.chatRecording ?? true,
lsp: {
enabled: lspEnabled,
allowed: lspAllowed,
excluded: lspExcluded,
},
});
const shouldStartLsp = options.startLsp ?? true;
if (shouldStartLsp && lspEnabled) {
try {
const lspService = new NativeLspService(
config,
config.getWorkspaceContext(),
appEvents,
fileService,
ideContextStore,
{
allowedServers: lspAllowed,
excludedServers: lspExcluded,
requireTrustedWorkspace: folderTrust,
inlineServerConfigs: lspLanguageServers,
},
);
await lspService.discoverAndPrepare();
await lspService.start();
lspClient = new NativeLspClient(lspService);
config.setLspClient(lspClient);
} catch (err) {
logger.warn('Failed to initialize native LSP service:', err);
}
}
return config;
}
function allowedMcpServers(

View File

@@ -122,10 +122,9 @@ export const defaultKeyBindings: KeyBindingConfig = {
// Auto-completion
[Command.ACCEPT_SUGGESTION]: [{ key: 'tab' }, { key: 'return', ctrl: false }],
// Completion navigation uses only arrow keys
// Ctrl+P/N are reserved for history navigation (HISTORY_UP/DOWN)
[Command.COMPLETION_UP]: [{ key: 'up' }],
[Command.COMPLETION_DOWN]: [{ key: 'down' }],
// Completion navigation (arrow or Ctrl+P/N)
[Command.COMPLETION_UP]: [{ key: 'up' }, { key: 'p', ctrl: true }],
[Command.COMPLETION_DOWN]: [{ key: 'down' }, { key: 'n', ctrl: true }],
// Text input
// Must also exclude shift to allow shift+enter for newline

View File

@@ -1,39 +0,0 @@
import type { JSONSchema7 } from 'json-schema';
export const lspSettingsSchema: JSONSchema7 = {
type: 'object',
properties: {
'lsp.enabled': {
type: 'boolean',
default: false,
description:
'Enable LSP (Language Server Protocol) support (experimental). Must be explicitly enabled via the --experimental-lsp command-line flag.'
},
'lsp.allowed': {
type: 'array',
items: {
type: 'string'
},
default: [],
description: 'List of LSP servers that are allowed to run'
},
'lsp.excluded': {
type: 'array',
items: {
type: 'string'
},
default: [],
description: 'List of LSP servers that are not allowed to run'
},
'lsp.autoDetect': {
type: 'boolean',
default: true,
description: 'Automatically detect project languages and start the corresponding LSP servers'
},
'lsp.serverTimeout': {
type: 'number',
default: 10000,
description: 'LSP 服务器启动超时时间(毫秒)'
}
}
};

View File

@@ -1,87 +0,0 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, expect, it } from 'vitest';
import { SettingScope } from './settings.js';
import { getPersistScopeForModelSelection } from './modelProvidersScope.js';
function makeSettings({
isTrusted,
userModelProviders,
workspaceModelProviders,
}: {
isTrusted: boolean;
userModelProviders?: unknown;
workspaceModelProviders?: unknown;
}) {
const userSettings: Record<string, unknown> = {};
const workspaceSettings: Record<string, unknown> = {};
// When undefined, treat as "not present in this scope" (the key is omitted),
// matching how LoadedSettings is shaped when a settings file doesn't define it.
if (userModelProviders !== undefined) {
userSettings['modelProviders'] = userModelProviders;
}
if (workspaceModelProviders !== undefined) {
workspaceSettings['modelProviders'] = workspaceModelProviders;
}
return {
isTrusted,
user: { settings: userSettings },
workspace: { settings: workspaceSettings },
} as unknown as import('./settings.js').LoadedSettings;
}
describe('getPersistScopeForModelSelection', () => {
it('prefers workspace when trusted and workspace defines modelProviders', () => {
const settings = makeSettings({
isTrusted: true,
workspaceModelProviders: {},
userModelProviders: { anything: true },
});
expect(getPersistScopeForModelSelection(settings)).toBe(
SettingScope.Workspace,
);
});
it('falls back to user when workspace does not define modelProviders', () => {
const settings = makeSettings({
isTrusted: true,
workspaceModelProviders: undefined,
userModelProviders: {},
});
expect(getPersistScopeForModelSelection(settings)).toBe(SettingScope.User);
});
it('ignores workspace modelProviders when workspace is untrusted', () => {
const settings = makeSettings({
isTrusted: false,
workspaceModelProviders: {},
userModelProviders: undefined,
});
expect(getPersistScopeForModelSelection(settings)).toBe(SettingScope.User);
});
it('falls back to legacy trust heuristic when neither scope defines modelProviders', () => {
const trusted = makeSettings({
isTrusted: true,
userModelProviders: undefined,
workspaceModelProviders: undefined,
});
expect(getPersistScopeForModelSelection(trusted)).toBe(SettingScope.User);
const untrusted = makeSettings({
isTrusted: false,
userModelProviders: undefined,
workspaceModelProviders: undefined,
});
expect(getPersistScopeForModelSelection(untrusted)).toBe(SettingScope.User);
});
});

View File

@@ -1,48 +0,0 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { SettingScope, type LoadedSettings } from './settings.js';
function hasOwnModelProviders(settingsObj: unknown): boolean {
if (!settingsObj || typeof settingsObj !== 'object') {
return false;
}
const obj = settingsObj as Record<string, unknown>;
// Treat an explicitly configured empty object (modelProviders: {}) as "owned"
// by this scope, which is important when mergeStrategy is REPLACE.
return Object.prototype.hasOwnProperty.call(obj, 'modelProviders');
}
/**
* Returns which writable scope (Workspace/User) owns the effective modelProviders
* configuration.
*
* Note: Workspace scope is only considered when the workspace is trusted.
*/
export function getModelProvidersOwnerScope(
settings: LoadedSettings,
): SettingScope | undefined {
if (settings.isTrusted && hasOwnModelProviders(settings.workspace.settings)) {
return SettingScope.Workspace;
}
if (hasOwnModelProviders(settings.user.settings)) {
return SettingScope.User;
}
return undefined;
}
/**
* Choose the settings scope to persist a model selection.
* Prefer persisting back to the scope that contains the effective modelProviders
* config, otherwise fall back to the legacy trust-based heuristic.
*/
export function getPersistScopeForModelSelection(
settings: LoadedSettings,
): SettingScope {
return getModelProvidersOwnerScope(settings) ?? SettingScope.User;
}

View File

@@ -55,7 +55,6 @@ import { disableExtension } from './extension.js';
// These imports will get the versions from the vi.mock('./settings.js', ...) factory.
import {
getSettingsWarnings,
loadSettings,
USER_SETTINGS_PATH, // This IS the mocked path.
getSystemSettingsPath,
@@ -419,86 +418,6 @@ describe('Settings Loading and Merging', () => {
});
});
it('should warn about ignored legacy keys in a v2 settings file', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
usageStatisticsEnabled: false,
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(getSettingsWarnings(settings)).toEqual(
expect.arrayContaining([
expect.stringContaining(
"Legacy setting 'usageStatisticsEnabled' will be ignored",
),
]),
);
expect(getSettingsWarnings(settings)).toEqual(
expect.arrayContaining([
expect.stringContaining("'privacy.usageStatisticsEnabled'"),
]),
);
});
it('should warn about unknown top-level keys in a v2 settings file', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
someUnknownKey: 'value',
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(getSettingsWarnings(settings)).toEqual(
expect.arrayContaining([
expect.stringContaining(
"Unknown setting 'someUnknownKey' will be ignored",
),
]),
);
});
it('should not warn for valid v2 container keys', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
[SETTINGS_VERSION_KEY]: SETTINGS_VERSION,
model: { name: 'qwen-coder' },
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(getSettingsWarnings(settings)).toEqual([]);
});
it('should rewrite allowedTools to tools.allowed during migration', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,

View File

@@ -106,6 +106,7 @@ const MIGRATION_MAP: Record<string, string> = {
mcpServers: 'mcpServers',
mcpServerCommand: 'mcp.serverCommand',
memoryImportFormat: 'context.importFormat',
memoryDiscoveryMaxDirs: 'context.discoveryMaxDirs',
model: 'model.name',
preferredEditor: 'general.preferredEditor',
sandbox: 'tools.sandbox',
@@ -159,39 +160,6 @@ export function getSystemDefaultsPath(): string {
);
}
function getVsCodeSettingsPath(workspaceDir: string): string {
return path.join(workspaceDir, '.vscode', 'settings.json');
}
function loadVsCodeSettings(workspaceDir: string): Settings {
const vscodeSettingsPath = getVsCodeSettingsPath(workspaceDir);
try {
if (fs.existsSync(vscodeSettingsPath)) {
const content = fs.readFileSync(vscodeSettingsPath, 'utf-8');
const rawSettings: unknown = JSON.parse(stripJsonComments(content));
if (
typeof rawSettings !== 'object' ||
rawSettings === null ||
Array.isArray(rawSettings)
) {
console.error(
`VS Code settings file is not a valid JSON object: ${vscodeSettingsPath}`,
);
return {};
}
return rawSettings as Settings;
}
} catch (error: unknown) {
console.error(
`Error loading VS Code settings from ${vscodeSettingsPath}:`,
getErrorMessage(error),
);
}
return {};
}
export type { DnsResolutionOrder } from './settingsSchema.js';
export enum SettingScope {
@@ -376,97 +344,6 @@ const KNOWN_V2_CONTAINERS = new Set(
Object.values(MIGRATION_MAP).map((path) => path.split('.')[0]),
);
function getSettingsFileKeyWarnings(
settings: Record<string, unknown>,
settingsFilePath: string,
): string[] {
const version = settings[SETTINGS_VERSION_KEY];
if (typeof version !== 'number' || version < SETTINGS_VERSION) {
return [];
}
const warnings: string[] = [];
const ignoredLegacyKeys = new Set<string>();
// Ignored legacy keys (V1 top-level keys that moved to a nested V2 path).
for (const [oldKey, newPath] of Object.entries(MIGRATION_MAP)) {
if (oldKey === newPath) {
continue;
}
if (!(oldKey in settings)) {
continue;
}
const oldValue = settings[oldKey];
// If this key is a V2 container (like 'model') and it's already an object,
// it's likely already in V2 format. Don't warn.
if (
KNOWN_V2_CONTAINERS.has(oldKey) &&
typeof oldValue === 'object' &&
oldValue !== null &&
!Array.isArray(oldValue)
) {
continue;
}
ignoredLegacyKeys.add(oldKey);
warnings.push(
`⚠️ Legacy setting '${oldKey}' will be ignored in ${settingsFilePath}. Please use '${newPath}' instead.`,
);
}
// Unknown top-level keys.
const schemaKeys = new Set(Object.keys(getSettingsSchema()));
for (const key of Object.keys(settings)) {
if (key === SETTINGS_VERSION_KEY) {
continue;
}
if (ignoredLegacyKeys.has(key)) {
continue;
}
if (schemaKeys.has(key)) {
continue;
}
warnings.push(
`⚠️ Unknown setting '${key}' will be ignored in ${settingsFilePath}.`,
);
}
return warnings;
}
/**
* Collects warnings for ignored legacy and unknown settings keys.
*
* For `$version: 2` settings files, we do not apply implicit migrations.
* Instead, we surface actionable, de-duplicated warnings in the terminal UI.
*/
export function getSettingsWarnings(loadedSettings: LoadedSettings): string[] {
const warningSet = new Set<string>();
for (const scope of [SettingScope.User, SettingScope.Workspace]) {
const settingsFile = loadedSettings.forScope(scope);
if (settingsFile.rawJson === undefined) {
continue; // File not present / not loaded.
}
const settingsObject = settingsFile.originalSettings as unknown as Record<
string,
unknown
>;
for (const warning of getSettingsFileKeyWarnings(
settingsObject,
settingsFile.path,
)) {
warningSet.add(warning);
}
}
return [...warningSet];
}
export function migrateSettingsToV1(
v2Settings: Record<string, unknown>,
): Record<string, unknown> {
@@ -755,9 +632,6 @@ export function loadSettings(
workspaceDir,
).getWorkspaceSettingsPath();
// Load VS Code settings as an additional source of configuration
const vscodeSettings = loadVsCodeSettings(workspaceDir);
const loadAndMigrate = (
filePath: string,
scope: SettingScope,
@@ -862,14 +736,6 @@ export function loadSettings(
userSettings = resolveEnvVarsInObject(userResult.settings);
workspaceSettings = resolveEnvVarsInObject(workspaceResult.settings);
// Merge VS Code settings into workspace settings (VS Code settings take precedence)
workspaceSettings = customDeepMerge(
getMergeStrategyForPath,
{},
workspaceSettings,
vscodeSettings,
) as Settings;
// Support legacy theme names
if (userSettings.ui?.theme === 'VS') {
userSettings.ui.theme = DefaultLight.name;
@@ -883,13 +749,11 @@ export function loadSettings(
}
// For the initial trust check, we can only use user and system settings.
// We also include VS Code settings as they may contain trust-related settings
const initialTrustCheckSettings = customDeepMerge(
getMergeStrategyForPath,
{},
systemSettings,
userSettings,
vscodeSettings, // Include VS Code settings
);
const isTrusted =
isWorkspaceTrusted(initialTrustCheckSettings as Settings).isTrusted ?? true;
@@ -903,18 +767,9 @@ export function loadSettings(
isTrusted,
);
// Add VS Code settings to the temp merged settings for environment loading
// Since loadEnvironment depends on settings, we need to consider VS Code settings as well
const tempMergedSettingsWithVsCode = customDeepMerge(
getMergeStrategyForPath,
{},
tempMergedSettings,
vscodeSettings,
) as Settings;
// loadEnvironment depends on settings so we have to create a temp version of
// the settings to avoid a cycle
loadEnvironment(tempMergedSettingsWithVsCode);
loadEnvironment(tempMergedSettings);
// Create LoadedSettings first
@@ -976,21 +831,6 @@ export function migrateDeprecatedSettings(
loadedSettings.setValue(scope, 'extensions', newExtensionsValue);
}
const legacySkills = (
settings as Settings & {
tools?: { experimental?: { skills?: boolean } };
}
).tools?.experimental?.skills;
if (
legacySkills !== undefined &&
settings.experimental?.skills === undefined
) {
console.log(
`Migrating deprecated tools.experimental.skills setting from ${scope} settings...`,
);
loadedSettings.setValue(scope, 'experimental.skills', legacySkills);
}
};
processScope(SettingScope.User);
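
For a rough picture of what the removed warning collection produced, consider a version-2 settings file that still carries one legacy key and one unknown key. The file path and the exact version key spelling are illustrative; the warning templates come from getSettingsFileKeyWarnings above.

// Illustrative input (assumed to live at ~/.qwen/settings.json):
const v2Settings = {
  $version: 2, // assumed spelling of SETTINGS_VERSION_KEY
  usageStatisticsEnabled: false, // legacy V1 key; MIGRATION_MAP points to 'privacy.usageStatisticsEnabled'
  someUnknownKey: 'value', // not present in the settings schema
};
// Expected warnings, de-duplicated across the User and Workspace scopes:
//   ⚠️ Legacy setting 'usageStatisticsEnabled' will be ignored in ~/.qwen/settings.json. Please use 'privacy.usageStatisticsEnabled' instead.
//   ⚠️ Unknown setting 'someUnknownKey' will be ignored in ~/.qwen/settings.json.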

View File

@@ -10,7 +10,6 @@ import type {
TelemetrySettings,
AuthType,
ChatCompressionSettings,
ModelProvidersConfig,
} from '@qwen-code/qwen-code-core';
import {
ApprovalMode,
@@ -103,19 +102,6 @@ const SETTINGS_SCHEMA = {
mergeStrategy: MergeStrategy.SHALLOW_MERGE,
},
// Model providers configuration grouped by authType
modelProviders: {
type: 'object',
label: 'Model Providers',
category: 'Model',
requiresRestart: false,
default: {} as ModelProvidersConfig,
description:
'Model providers configuration grouped by authType. Each authType contains an array of model configurations.',
showInDialog: false,
mergeStrategy: MergeStrategy.REPLACE,
},
general: {
type: 'object',
label: 'General',
@@ -434,16 +420,6 @@ const SETTINGS_SCHEMA = {
'Show welcome back dialog when returning to a project with conversation history.',
showInDialog: true,
},
enableUserFeedback: {
type: 'boolean',
label: 'Enable User Feedback',
category: 'UI',
requiresRestart: false,
default: true,
description:
'Show optional feedback dialog after conversations to help improve Qwen performance.',
showInDialog: true,
},
accessibility: {
type: 'object',
label: 'Accessibility',
@@ -474,15 +450,6 @@ const SETTINGS_SCHEMA = {
},
},
},
feedbackLastShownTimestamp: {
type: 'number',
label: 'Feedback Last Shown Timestamp',
category: 'UI',
requiresRestart: false,
default: 0,
description: 'The last time the feedback dialog was shown.',
showInDialog: false,
},
},
},
@@ -741,6 +708,15 @@ const SETTINGS_SCHEMA = {
description: 'The format to use when importing memory.',
showInDialog: false,
},
discoveryMaxDirs: {
type: 'number',
label: 'Memory Discovery Max Dirs',
category: 'Context',
requiresRestart: false,
default: 200,
description: 'Maximum number of directories to search for memory.',
showInDialog: true,
},
includeDirectories: {
type: 'array',
label: 'Include Directories',
@@ -1032,59 +1008,6 @@ const SETTINGS_SCHEMA = {
},
},
},
lsp: {
type: 'object',
label: 'LSP',
category: 'LSP',
requiresRestart: true,
default: {},
description:
'Settings for the native Language Server Protocol integration. Enable with --experimental-lsp flag.',
showInDialog: false,
properties: {
enabled: {
type: 'boolean',
label: 'Enable LSP',
category: 'LSP',
requiresRestart: true,
default: false,
description:
'Enable the native LSP client. Prefer using --experimental-lsp command line flag instead.',
showInDialog: false,
},
allowed: {
type: 'array',
label: 'Allow LSP Servers',
category: 'LSP',
requiresRestart: true,
default: undefined as string[] | undefined,
description:
'Optional allowlist of LSP server names. If set, only matching servers will start.',
showInDialog: false,
},
excluded: {
type: 'array',
label: 'Exclude LSP Servers',
category: 'LSP',
requiresRestart: true,
default: undefined as string[] | undefined,
description:
'Optional blocklist of LSP server names that should not start.',
showInDialog: false,
},
languageServers: {
type: 'object',
label: 'LSP Language Servers',
category: 'LSP',
requiresRestart: true,
default: {} as Record<string, unknown>,
description:
'Inline LSP server configuration (same format as .lsp.json).',
showInDialog: false,
mergeStrategy: MergeStrategy.SHALLOW_MERGE,
},
},
},
useSmartEdit: {
type: 'boolean',
label: 'Use Smart Edit',
@@ -1270,16 +1193,6 @@ const SETTINGS_SCHEMA = {
description: 'Setting to enable experimental features',
showInDialog: false,
properties: {
skills: {
type: 'boolean',
label: 'Skills',
category: 'Experimental',
requiresRestart: true,
default: false,
description:
'Enable experimental Agent Skills feature. When enabled, Qwen Code can use Skills from .qwen/skills/ and ~/.qwen/skills/.',
showInDialog: true,
},
extensionManagement: {
type: 'boolean',
label: 'Extension Management',
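
Alongside the removals, this file adds the context.discoveryMaxDirs entry shown above. A minimal sketch of the corresponding v2 settings fragment, with its legacy v1 key noted from MIGRATION_MAP (200 is the schema default):

// Sketch of a settings object using the new key; not taken from the repository.
const fragment = {
  context: {
    discoveryMaxDirs: 200, // replaces the top-level v1 key memoryDiscoveryMaxDirs
  },
};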

View File

@@ -45,9 +45,7 @@ export async function initializeApp(
// Auto-detect and set LLM output language on first use
initializeLlmOutputLanguage();
// Use authType from modelsConfig which respects CLI --auth-type argument
// over settings.security.auth.selectedType
const authType = config.modelsConfig.getCurrentAuthType();
const authType = settings.merged.security?.auth?.selectedType;
const authError = await performInitialAuth(config, authType);
// Fallback to user select when initial authentication fails
@@ -61,7 +59,7 @@ export async function initializeApp(
const themeError = validateTheme(settings);
const shouldOpenAuthDialog =
!config.modelsConfig.wasAuthTypeExplicitlyProvided() || !!authError;
settings.merged.security?.auth?.selectedType === undefined || !!authError;
if (config.getIdeMode()) {
const ideClient = await IdeClient.getInstance();

View File

@@ -87,15 +87,6 @@ vi.mock('./config/sandboxConfig.js', () => ({
loadSandboxConfig: vi.fn(),
}));
vi.mock('./core/initializer.js', () => ({
initializeApp: vi.fn().mockResolvedValue({
authError: null,
themeError: null,
shouldOpenAuthDialog: false,
geminiMdFileCount: 0,
}),
}));
describe('gemini.tsx main function', () => {
let originalEnvGeminiSandbox: string | undefined;
let originalEnvSandbox: string | undefined;
@@ -371,6 +362,7 @@ describe('gemini.tsx main function', () => {
expect(inputArg).toBe('hello stream');
expect(validateAuthSpy).toHaveBeenCalledWith(
undefined,
undefined,
configStub,
expect.any(Object),

View File

@@ -4,7 +4,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
import type { Config } from '@qwen-code/qwen-code-core';
import type { Config, AuthType } from '@qwen-code/qwen-code-core';
import { InputFormat, logUserPrompt } from '@qwen-code/qwen-code-core';
import { render } from 'ink';
import dns from 'node:dns';
@@ -17,11 +17,7 @@ import * as cliConfig from './config/config.js';
import { loadCliConfig, parseArguments } from './config/config.js';
import { ExtensionStorage, loadExtensions } from './config/extension.js';
import type { DnsResolutionOrder, LoadedSettings } from './config/settings.js';
import {
getSettingsWarnings,
loadSettings,
migrateDeprecatedSettings,
} from './config/settings.js';
import { loadSettings, migrateDeprecatedSettings } from './config/settings.js';
import {
initializeApp,
type InitializationResult,
@@ -254,24 +250,24 @@ export async function main() {
[],
new ExtensionEnablementManager(ExtensionStorage.getUserExtensionsDir()),
argv,
undefined,
{ startLsp: false },
);
if (!settings.merged.security?.auth?.useExternal) {
if (
settings.merged.security?.auth?.selectedType &&
!settings.merged.security?.auth?.useExternal
) {
// Validate authentication here because the sandbox will interfere with the Oauth2 web redirect.
try {
const authType = partialConfig.modelsConfig.getCurrentAuthType();
// Fresh users may not have selected/persisted an authType yet.
// In that case, defer auth prompting/selection to the main interactive flow.
if (authType) {
const err = validateAuthMethod(authType, partialConfig);
if (err) {
throw new Error(err);
}
await partialConfig.refreshAuth(authType);
const err = validateAuthMethod(
settings.merged.security.auth.selectedType,
);
if (err) {
throw new Error(err);
}
await partialConfig.refreshAuth(
settings.merged.security.auth.selectedType,
);
} catch (err) {
console.error('Error authenticating:', err);
process.exit(1);
@@ -348,7 +344,6 @@ export async function main() {
extensionEnablementManager,
argv,
);
registerCleanup(() => config.shutdown());
if (config.getListExtensions()) {
console.log('Installed extensions:');
@@ -407,15 +402,12 @@ export async function main() {
let input = config.getQuestion();
const startupWarnings = [
...new Set([
...(await getStartupWarnings()),
...(await getUserStartupWarnings({
workspaceRoot: process.cwd(),
useRipgrep: settings.merged.tools?.useRipgrep ?? true,
useBuiltinRipgrep: settings.merged.tools?.useBuiltinRipgrep ?? true,
})),
...getSettingsWarnings(settings),
]),
...(await getStartupWarnings()),
...(await getUserStartupWarnings({
workspaceRoot: process.cwd(),
useRipgrep: settings.merged.tools?.useRipgrep ?? true,
useBuiltinRipgrep: settings.merged.tools?.useBuiltinRipgrep ?? true,
})),
];
// Render UI, passing necessary config values. Check that there is no command line question.
@@ -448,6 +440,8 @@ export async function main() {
}
const nonInteractiveConfig = await validateNonInteractiveAuth(
(argv.authType as AuthType) ||
settings.merged.security?.auth?.selectedType,
settings.merged.security?.auth?.useExternal,
config,
settings,
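
In short, the non-interactive path now resolves the auth type from the CLI flag first and the persisted settings second. A condensed sketch, not a verbatim excerpt; the names match the surrounding diff:

const effectiveAuthType =
  (argv.authType as AuthType) || // explicit --auth-type flag wins
  settings.merged.security?.auth?.selectedType; // otherwise the persisted selection
// validateNonInteractiveAuth(effectiveAuthType, settings.merged.security?.auth?.useExternal, config, settings)
// then performs the final validation before the non-interactive run starts.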

View File

@@ -45,8 +45,7 @@ export default {
'Initializing...': 'Initialisierung...',
'Connecting to MCP servers... ({{connected}}/{{total}})':
'Verbindung zu MCP-Servern wird hergestellt... ({{connected}}/{{total}})',
'Type your message or @path/to/file':
'Nachricht eingeben oder @Pfad/zur/Datei',
'Type your message or @path/to/file': 'Nachricht eingeben oder @Pfad/zur/Datei',
"Press 'i' for INSERT mode and 'Esc' for NORMAL mode.":
"Drücken Sie 'i' für den EINFÜGE-Modus und 'Esc' für den NORMAL-Modus.",
'Cancel operation / Clear input (double press)':
@@ -90,8 +89,7 @@ export default {
'No tools available': 'Keine Werkzeuge verfügbar',
'View or change the approval mode for tool usage':
'Genehmigungsmodus für Werkzeugnutzung anzeigen oder ändern',
'View or change the language setting':
'Spracheinstellung anzeigen oder ändern',
'View or change the language setting': 'Spracheinstellung anzeigen oder ändern',
'change the theme': 'Design ändern',
'Select Theme': 'Design auswählen',
Preview: 'Vorschau',
@@ -215,16 +213,14 @@ export default {
'All Tools': 'Alle Werkzeuge',
'Read-only Tools': 'Nur-Lese-Werkzeuge',
'Read & Edit Tools': 'Lese- und Bearbeitungswerkzeuge',
'Read & Edit & Execution Tools':
'Lese-, Bearbeitungs- und Ausführungswerkzeuge',
'Read & Edit & Execution Tools': 'Lese-, Bearbeitungs- und Ausführungswerkzeuge',
'All tools selected, including MCP tools':
'Alle Werkzeuge ausgewählt, einschließlich MCP-Werkzeuge',
'Selected tools:': 'Ausgewählte Werkzeuge:',
'Read-only tools:': 'Nur-Lese-Werkzeuge:',
'Edit tools:': 'Bearbeitungswerkzeuge:',
'Execution tools:': 'Ausführungswerkzeuge:',
'Step {{n}}: Choose Background Color':
'Schritt {{n}}: Hintergrundfarbe wählen',
'Step {{n}}: Choose Background Color': 'Schritt {{n}}: Hintergrundfarbe wählen',
'Step {{n}}: Confirm and Save': 'Schritt {{n}}: Bestätigen und Speichern',
// Agents - Navigation & Instructions
'Esc to cancel': 'Esc zum Abbrechen',
@@ -249,16 +245,14 @@ export default {
'e.g., Reviews code for best practices and potential bugs.':
'z.B. Überprüft Code auf Best Practices und mögliche Fehler.',
'Description cannot be empty.': 'Beschreibung darf nicht leer sein.',
'Failed to launch editor: {{error}}':
'Fehler beim Starten des Editors: {{error}}',
'Failed to launch editor: {{error}}': 'Fehler beim Starten des Editors: {{error}}',
'Failed to save and edit subagent: {{error}}':
'Fehler beim Speichern und Bearbeiten des Unteragenten: {{error}}',
// ============================================================================
// Commands - General (continued)
// ============================================================================
'View and edit Qwen Code settings':
'Qwen Code Einstellungen anzeigen und bearbeiten',
'View and edit Qwen Code settings': 'Qwen Code Einstellungen anzeigen und bearbeiten',
Settings: 'Einstellungen',
'(Use Enter to select{{tabText}})': '(Enter zum Auswählen{{tabText}})',
', Tab to change focus': ', Tab zum Fokuswechsel',
@@ -289,13 +283,6 @@ export default {
'Show Citations': 'Quellenangaben anzeigen',
'Custom Witty Phrases': 'Benutzerdefinierte Witzige Sprüche',
'Enable Welcome Back': 'Willkommen-zurück aktivieren',
'Enable User Feedback': 'Benutzerfeedback aktivieren',
'How is Qwen doing this session? (optional)':
'Wie macht sich Qwen in dieser Sitzung? (optional)',
Bad: 'Schlecht',
Good: 'Gut',
'Not Sure Yet': 'Noch nicht sicher',
'Any other key': 'Beliebige andere Taste',
'Disable Loading Phrases': 'Ladesprüche deaktivieren',
'Screen Reader Mode': 'Bildschirmleser-Modus',
'IDE Mode': 'IDE-Modus',
@@ -321,8 +308,7 @@ export default {
'Use Ripgrep': 'Ripgrep verwenden',
'Use Builtin Ripgrep': 'Integriertes Ripgrep verwenden',
'Enable Tool Output Truncation': 'Werkzeugausgabe-Kürzung aktivieren',
'Tool Output Truncation Threshold':
'Schwellenwert für Werkzeugausgabe-Kürzung',
'Tool Output Truncation Threshold': 'Schwellenwert für Werkzeugausgabe-Kürzung',
'Tool Output Truncation Lines': 'Zeilen für Werkzeugausgabe-Kürzung',
'Folder Trust': 'Ordnervertrauen',
'Vision Model Preview': 'Vision-Modell-Vorschau',
@@ -378,8 +364,7 @@ export default {
'Failed to parse {{terminalName}} keybindings.json. The file contains invalid JSON. Please fix the file manually or delete it to allow automatic configuration.':
'Fehler beim Parsen von {{terminalName}} keybindings.json. Die Datei enthält ungültiges JSON. Bitte korrigieren Sie die Datei manuell oder löschen Sie sie, um automatische Konfiguration zu ermöglichen.',
'Error: {{error}}': 'Fehler: {{error}}',
'Shift+Enter binding already exists':
'Umschalt+Enter-Belegung existiert bereits',
'Shift+Enter binding already exists': 'Umschalt+Enter-Belegung existiert bereits',
'Ctrl+Enter binding already exists': 'Strg+Enter-Belegung existiert bereits',
'Existing keybindings detected. Will not modify to avoid conflicts.':
'Bestehende Tastenbelegungen erkannt. Keine Änderungen, um Konflikte zu vermeiden.',
@@ -413,8 +398,7 @@ export default {
'Set UI language': 'UI-Sprache festlegen',
'Set LLM output language': 'LLM-Ausgabesprache festlegen',
'Usage: /language ui [zh-CN|en-US]': 'Verwendung: /language ui [zh-CN|en-US]',
'Usage: /language output <language>':
'Verwendung: /language output <Sprache>',
'Usage: /language output <language>': 'Verwendung: /language output <Sprache>',
'Example: /language output 中文': 'Beispiel: /language output Deutsch',
'Example: /language output English': 'Beispiel: /language output English',
'Example: /language output 日本語': 'Beispiel: /language output Japanisch',
@@ -435,8 +419,7 @@ export default {
' - en-US: English': ' - en-US: Englisch',
'Set UI language to Simplified Chinese (zh-CN)':
'UI-Sprache auf Vereinfachtes Chinesisch (zh-CN) setzen',
'Set UI language to English (en-US)':
'UI-Sprache auf Englisch (en-US) setzen',
'Set UI language to English (en-US)': 'UI-Sprache auf Englisch (en-US) setzen',
// ============================================================================
// Commands - Approval Mode
@@ -444,8 +427,7 @@ export default {
'Approval Mode': 'Genehmigungsmodus',
'Current approval mode: {{mode}}': 'Aktueller Genehmigungsmodus: {{mode}}',
'Available approval modes:': 'Verfügbare Genehmigungsmodi:',
'Approval mode changed to: {{mode}}':
'Genehmigungsmodus geändert zu: {{mode}}',
'Approval mode changed to: {{mode}}': 'Genehmigungsmodus geändert zu: {{mode}}',
'Approval mode changed to: {{mode}} (saved to {{scope}} settings{{location}})':
'Genehmigungsmodus geändert zu: {{mode}} (gespeichert in {{scope}} Einstellungen{{location}})',
'Usage: /approval-mode <mode> [--session|--user|--project]':
@@ -470,16 +452,14 @@ export default {
'Fehler beim Ändern des Genehmigungsmodus: {{error}}',
'Apply to current session only (temporary)':
'Nur auf aktuelle Sitzung anwenden (temporär)',
'Persist for this project/workspace':
'Für dieses Projekt/Arbeitsbereich speichern',
'Persist for this project/workspace': 'Für dieses Projekt/Arbeitsbereich speichern',
'Persist for this user on this machine':
'Für diesen Benutzer auf diesem Computer speichern',
'Analyze only, do not modify files or execute commands':
'Nur analysieren, keine Dateien ändern oder Befehle ausführen',
'Require approval for file edits or shell commands':
'Genehmigung für Dateibearbeitungen oder Shell-Befehle erforderlich',
'Automatically approve file edits':
'Dateibearbeitungen automatisch genehmigen',
'Automatically approve file edits': 'Dateibearbeitungen automatisch genehmigen',
'Automatically approve all tools': 'Alle Werkzeuge automatisch genehmigen',
'Workspace approval mode exists and takes priority. User-level change will have no effect.':
'Arbeitsbereich-Genehmigungsmodus existiert und hat Vorrang. Benutzerebene-Änderung hat keine Wirkung.',
@@ -495,14 +475,12 @@ export default {
'Commands for interacting with memory.':
'Befehle für die Interaktion mit dem Speicher.',
'Show the current memory contents.': 'Aktuellen Speicherinhalt anzeigen.',
'Show project-level memory contents.':
'Projektebene-Speicherinhalt anzeigen.',
'Show project-level memory contents.': 'Projektebene-Speicherinhalt anzeigen.',
'Show global memory contents.': 'Globalen Speicherinhalt anzeigen.',
'Add content to project-level memory.':
'Inhalt zum Projektebene-Speicher hinzufügen.',
'Add content to global memory.': 'Inhalt zum globalen Speicher hinzufügen.',
'Refresh the memory from the source.':
'Speicher aus der Quelle aktualisieren.',
'Refresh the memory from the source.': 'Speicher aus der Quelle aktualisieren.',
'Usage: /memory add --project <text to remember>':
'Verwendung: /memory add --project <zu merkender Text>',
'Usage: /memory add --global <text to remember>':
@@ -542,8 +520,7 @@ export default {
'Konfigurierte MCP-Server und Werkzeuge auflisten',
'Restarts MCP servers.': 'MCP-Server neu starten.',
'Config not loaded.': 'Konfiguration nicht geladen.',
'Could not retrieve tool registry.':
'Werkzeugregister konnte nicht abgerufen werden.',
'Could not retrieve tool registry.': 'Werkzeugregister konnte nicht abgerufen werden.',
'No MCP servers configured with OAuth authentication.':
'Keine MCP-Server mit OAuth-Authentifizierung konfiguriert.',
'MCP servers with OAuth authentication:':
@@ -562,8 +539,7 @@ export default {
// Commands - Chat
// ============================================================================
'Manage conversation history.': 'Gesprächsverlauf verwalten.',
'List saved conversation checkpoints':
'Gespeicherte Gesprächsprüfpunkte auflisten',
'List saved conversation checkpoints': 'Gespeicherte Gesprächsprüfpunkte auflisten',
'No saved conversation checkpoints found.':
'Keine gespeicherten Gesprächsprüfpunkte gefunden.',
'List of saved conversations:': 'Liste gespeicherter Gespräche:',
@@ -613,8 +589,7 @@ export default {
'Kein Chat-Client verfügbar, um Zusammenfassung zu generieren.',
'Already generating summary, wait for previous request to complete':
'Zusammenfassung wird bereits generiert, warten Sie auf Abschluss der vorherigen Anfrage',
'No conversation found to summarize.':
'Kein Gespräch zum Zusammenfassen gefunden.',
'No conversation found to summarize.': 'Kein Gespräch zum Zusammenfassen gefunden.',
'Failed to generate project context summary: {{error}}':
'Fehler beim Generieren der Projektkontextzusammenfassung: {{error}}',
'Saved project summary to {{filePathForDisplay}}.':
@@ -630,8 +605,7 @@ export default {
'Switch the model for this session': 'Modell für diese Sitzung wechseln',
'Content generator configuration not available.':
'Inhaltsgenerator-Konfiguration nicht verfügbar.',
'Authentication type not available.':
'Authentifizierungstyp nicht verfügbar.',
'Authentication type not available.': 'Authentifizierungstyp nicht verfügbar.',
'No models available for the current authentication type ({{authType}}).':
'Keine Modelle für den aktuellen Authentifizierungstyp ({{authType}}) verfügbar.',
@@ -648,8 +622,7 @@ export default {
// ============================================================================
'Already compressing, wait for previous request to complete':
'Komprimierung läuft bereits, warten Sie auf Abschluss der vorherigen Anfrage',
'Failed to compress chat history.':
'Fehler beim Komprimieren des Chatverlaufs.',
'Failed to compress chat history.': 'Fehler beim Komprimieren des Chatverlaufs.',
'Failed to compress chat history: {{error}}':
'Fehler beim Komprimieren des Chatverlaufs: {{error}}',
'Compressing chat history': 'Chatverlauf wird komprimiert',
@@ -671,12 +644,10 @@ export default {
'Bitte geben Sie mindestens einen Pfad zum Hinzufügen an.',
'The /directory add command is not supported in restrictive sandbox profiles. Please use --include-directories when starting the session instead.':
'Der Befehl /directory add wird in restriktiven Sandbox-Profilen nicht unterstützt. Bitte verwenden Sie --include-directories beim Starten der Sitzung.',
"Error adding '{{path}}': {{error}}":
"Fehler beim Hinzufügen von '{{path}}': {{error}}",
"Error adding '{{path}}': {{error}}": "Fehler beim Hinzufügen von '{{path}}': {{error}}",
'Successfully added QWEN.md files from the following directories if there are:\n- {{directories}}':
'QWEN.md-Dateien aus folgenden Verzeichnissen erfolgreich hinzugefügt, falls vorhanden:\n- {{directories}}',
'Error refreshing memory: {{error}}':
'Fehler beim Aktualisieren des Speichers: {{error}}',
'Error refreshing memory: {{error}}': 'Fehler beim Aktualisieren des Speichers: {{error}}',
'Successfully added directories:\n- {{directories}}':
'Verzeichnisse erfolgreich hinzugefügt:\n- {{directories}}',
'Current workspace directories:\n{{directories}}':
@@ -706,8 +677,7 @@ export default {
'Yes, allow always': 'Ja, immer erlauben',
'Modify with external editor': 'Mit externem Editor bearbeiten',
'No, suggest changes (esc)': 'Nein, Änderungen vorschlagen (Esc)',
"Allow execution of: '{{command}}'?":
"Ausführung erlauben von: '{{command}}'?",
"Allow execution of: '{{command}}'?": "Ausführung erlauben von: '{{command}}'?",
'Yes, allow always ...': 'Ja, immer erlauben ...',
'Yes, and auto-accept edits': 'Ja, und Änderungen automatisch akzeptieren',
'Yes, and manually approve edits': 'Ja, und Änderungen manuell genehmigen',
@@ -779,14 +749,12 @@ export default {
'Qwen OAuth authentication cancelled.':
'Qwen OAuth-Authentifizierung abgebrochen.',
'Qwen OAuth Authentication': 'Qwen OAuth-Authentifizierung',
'Please visit this URL to authorize:':
'Bitte besuchen Sie diese URL zur Autorisierung:',
'Please visit this URL to authorize:': 'Bitte besuchen Sie diese URL zur Autorisierung:',
'Or scan the QR code below:': 'Oder scannen Sie den QR-Code unten:',
'Waiting for authorization': 'Warten auf Autorisierung',
'Time remaining:': 'Verbleibende Zeit:',
'(Press ESC or CTRL+C to cancel)': '(ESC oder STRG+C zum Abbrechen drücken)',
'Qwen OAuth Authentication Timeout':
'Qwen OAuth-Authentifizierung abgelaufen',
'Qwen OAuth Authentication Timeout': 'Qwen OAuth-Authentifizierung abgelaufen',
'OAuth token expired (over {{seconds}} seconds). Please select authentication method again.':
'OAuth-Token abgelaufen (über {{seconds}} Sekunden). Bitte wählen Sie erneut eine Authentifizierungsmethode.',
'Press any key to return to authentication type selection.':
@@ -799,22 +767,6 @@ export default {
'Authentifizierung abgelaufen. Bitte versuchen Sie es erneut.',
'Waiting for auth... (Press ESC or CTRL+C to cancel)':
'Warten auf Authentifizierung... (ESC oder STRG+C zum Abbrechen drücken)',
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
'API-Schlüssel für OpenAI-kompatible Authentifizierung fehlt. Setzen Sie settings.security.auth.apiKey oder die Umgebungsvariable {{envKeyHint}}.',
'{{envKeyHint}} environment variable not found.':
'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden.',
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden. Bitte legen Sie sie in Ihrer .env-Datei oder den Systemumgebungsvariablen fest.',
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden (oder setzen Sie settings.security.auth.apiKey). Bitte legen Sie sie in Ihrer .env-Datei oder den Systemumgebungsvariablen fest.',
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
'API-Schlüssel für OpenAI-kompatible Authentifizierung fehlt. Setzen Sie die Umgebungsvariable {{envKeyHint}}.',
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
'Anthropic-Anbieter fehlt erforderliche baseUrl in modelProviders[].baseUrl.',
'ANTHROPIC_BASE_URL environment variable not found.':
'Umgebungsvariable ANTHROPIC_BASE_URL wurde nicht gefunden.',
'Invalid auth method selected.':
'Ungültige Authentifizierungsmethode ausgewählt.',
'Failed to authenticate. Message: {{message}}':
'Authentifizierung fehlgeschlagen. Meldung: {{message}}',
'Authenticated successfully with {{authType}} credentials.':
@@ -827,8 +779,7 @@ export default {
'API Key:': 'API-Schlüssel:',
'Invalid credentials: {{errorMessage}}':
'Ungültige Anmeldedaten: {{errorMessage}}',
'Failed to validate credentials':
'Anmeldedaten konnten nicht validiert werden',
'Failed to validate credentials': 'Anmeldedaten konnten nicht validiert werden',
'Press Enter to continue, Tab/↑↓ to navigate, Esc to cancel':
'Enter zum Fortfahren, Tab/↑↓ zum Navigieren, Esc zum Abbrechen',
@@ -837,15 +788,6 @@ export default {
// ============================================================================
'Select Model': 'Modell auswählen',
'(Press Esc to close)': '(Esc zum Schließen drücken)',
'Current (effective) configuration': 'Aktuelle (wirksame) Konfiguration',
AuthType: 'Authentifizierungstyp',
'API Key': 'API-Schlüssel',
unset: 'nicht gesetzt',
'(default)': '(Standard)',
'(set)': '(gesetzt)',
'(not set)': '(nicht gesetzt)',
"Failed to switch model to '{{modelId}}'.\n\n{{error}}":
"Modell konnte nicht auf '{{modelId}}' umgestellt werden.\n\n{{error}}",
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
'Das neueste Qwen Coder Modell von Alibaba Cloud ModelStudio (Version: qwen3-coder-plus-2025-09-23)',
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
@@ -935,10 +877,8 @@ export default {
// ============================================================================
// Exit Screen / Stats
// ============================================================================
'Agent powering down. Goodbye!':
'Agent wird heruntergefahren. Auf Wiedersehen!',
'To continue this session, run':
'Um diese Sitzung fortzusetzen, führen Sie aus',
'Agent powering down. Goodbye!': 'Agent wird heruntergefahren. Auf Wiedersehen!',
'To continue this session, run': 'Um diese Sitzung fortzusetzen, führen Sie aus',
'Interaction Summary': 'Interaktionszusammenfassung',
'Session ID:': 'Sitzungs-ID:',
'Tool Calls:': 'Werkzeugaufrufe:',

View File

@@ -286,13 +286,6 @@ export default {
'Show Citations': 'Show Citations',
'Custom Witty Phrases': 'Custom Witty Phrases',
'Enable Welcome Back': 'Enable Welcome Back',
'Enable User Feedback': 'Enable User Feedback',
'How is Qwen doing this session? (optional)':
'How is Qwen doing this session? (optional)',
Bad: 'Bad',
Good: 'Good',
'Not Sure Yet': 'Not Sure Yet',
'Any other key': 'Any other key',
'Disable Loading Phrases': 'Disable Loading Phrases',
'Screen Reader Mode': 'Screen Reader Mode',
'IDE Mode': 'IDE Mode',
@@ -777,21 +770,6 @@ export default {
'Authentication timed out. Please try again.',
'Waiting for auth... (Press ESC or CTRL+C to cancel)':
'Waiting for auth... (Press ESC or CTRL+C to cancel)',
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.',
'{{envKeyHint}} environment variable not found.':
'{{envKeyHint}} environment variable not found.',
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.',
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.',
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.',
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.',
'ANTHROPIC_BASE_URL environment variable not found.':
'ANTHROPIC_BASE_URL environment variable not found.',
'Invalid auth method selected.': 'Invalid auth method selected.',
'Failed to authenticate. Message: {{message}}':
'Failed to authenticate. Message: {{message}}',
'Authenticated successfully with {{authType}} credentials.':
@@ -813,15 +791,6 @@ export default {
// ============================================================================
'Select Model': 'Select Model',
'(Press Esc to close)': '(Press Esc to close)',
'Current (effective) configuration': 'Current (effective) configuration',
AuthType: 'AuthType',
'API Key': 'API Key',
unset: 'unset',
'(default)': '(default)',
'(set)': '(set)',
'(not set)': '(not set)',
"Failed to switch model to '{{modelId}}'.\n\n{{error}}":
"Failed to switch model to '{{modelId}}'.\n\n{{error}}",
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)',
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':

View File

@@ -289,13 +289,6 @@ export default {
'Show Citations': 'Показывать цитаты',
'Custom Witty Phrases': 'Пользовательские остроумные фразы',
'Enable Welcome Back': 'Включить приветствие при возврате',
'Enable User Feedback': 'Включить отзывы пользователей',
'How is Qwen doing this session? (optional)':
'Как дела у Qwen в этой сессии? (необязательно)',
Bad: 'Плохо',
Good: 'Хорошо',
'Not Sure Yet': 'Пока не уверен',
'Any other key': 'Любая другая клавиша',
'Disable Loading Phrases': 'Отключить фразы при загрузке',
'Screen Reader Mode': 'Режим программы чтения с экрана',
'IDE Mode': 'Режим IDE',
@@ -793,21 +786,6 @@ export default {
'Время ожидания авторизации истекло. Пожалуйста, попробуйте снова.',
'Waiting for auth... (Press ESC or CTRL+C to cancel)':
'Ожидание авторизации... (Нажмите ESC или CTRL+C для отмены)',
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
'Отсутствует API-ключ для аутентификации, совместимой с OpenAI. Укажите settings.security.auth.apiKey или переменную окружения {{envKeyHint}}.',
'{{envKeyHint}} environment variable not found.':
'Переменная окружения {{envKeyHint}} не найдена.',
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
'Переменная окружения {{envKeyHint}} не найдена. Укажите её в файле .env или среди системных переменных.',
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
'Переменная окружения {{envKeyHint}} не найдена (или установите settings.security.auth.apiKey). Укажите её в файле .env или среди системных переменных.',
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
'Отсутствует API-ключ для аутентификации, совместимой с OpenAI. Установите переменную окружения {{envKeyHint}}.',
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
'У провайдера Anthropic отсутствует обязательный baseUrl в modelProviders[].baseUrl.',
'ANTHROPIC_BASE_URL environment variable not found.':
'Переменная окружения ANTHROPIC_BASE_URL не найдена.',
'Invalid auth method selected.': 'Выбран недопустимый метод авторизации.',
'Failed to authenticate. Message: {{message}}':
'Не удалось авторизоваться. Сообщение: {{message}}',
'Authenticated successfully with {{authType}} credentials.':
@@ -829,15 +807,6 @@ export default {
// ============================================================================
'Select Model': 'Выбрать модель',
'(Press Esc to close)': '(Нажмите Esc для закрытия)',
'Current (effective) configuration': 'Текущая (фактическая) конфигурация',
AuthType: 'Тип авторизации',
'API Key': 'API-ключ',
unset: 'не задано',
'(default)': '(по умолчанию)',
'(set)': '(установлено)',
'(not set)': '(не задано)',
"Failed to switch model to '{{modelId}}'.\n\n{{error}}":
"Не удалось переключиться на модель '{{modelId}}'.\n\n{{error}}",
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
'Последняя модель Qwen Coder от Alibaba Cloud ModelStudio (версия: qwen3-coder-plus-2025-09-23)',
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':

View File

@@ -277,12 +277,6 @@ export default {
'Show Citations': '显示引用',
'Custom Witty Phrases': '自定义诙谐短语',
'Enable Welcome Back': '启用欢迎回来',
'Enable User Feedback': '启用用户反馈',
'How is Qwen doing this session? (optional)': 'Qwen 这次表现如何?(可选)',
Bad: '不满意',
Good: '满意',
'Not Sure Yet': '暂不评价',
'Any other key': '任意其他键',
'Disable Loading Phrases': '禁用加载短语',
'Screen Reader Mode': '屏幕阅读器模式',
'IDE Mode': 'IDE 模式',
@@ -734,21 +728,6 @@ export default {
'Authentication timed out. Please try again.': '认证超时。请重试。',
'Waiting for auth... (Press ESC or CTRL+C to cancel)':
'正在等待认证...(按 ESC 或 CTRL+C 取消)',
'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
'缺少 OpenAI 兼容认证的 API 密钥。请设置 settings.security.auth.apiKey 或设置 {{envKeyHint}} 环境变量。',
'{{envKeyHint}} environment variable not found.':
'未找到 {{envKeyHint}} 环境变量。',
'{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
'未找到 {{envKeyHint}} 环境变量。请在 .env 文件或系统环境变量中进行设置。',
'{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
'未找到 {{envKeyHint}} 环境变量(或设置 settings.security.auth.apiKey)。请在 .env 文件或系统环境变量中进行设置。',
'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
'缺少 OpenAI 兼容认证的 API 密钥。请设置 {{envKeyHint}} 环境变量。',
'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
'Anthropic 提供商缺少必需的 baseUrl,请在 modelProviders[].baseUrl 中配置。',
'ANTHROPIC_BASE_URL environment variable not found.':
'未找到 ANTHROPIC_BASE_URL 环境变量。',
'Invalid auth method selected.': '选择了无效的认证方式。',
'Failed to authenticate. Message: {{message}}': '认证失败。消息:{{message}}',
'Authenticated successfully with {{authType}} credentials.':
'使用 {{authType}} 凭据成功认证。',
@@ -768,15 +747,6 @@ export default {
// ============================================================================
'Select Model': '选择模型',
'(Press Esc to close)': '(按 Esc 关闭)',
'Current (effective) configuration': '当前(实际生效)配置',
AuthType: '认证方式',
'API Key': 'API 密钥',
unset: '未设置',
'(default)': '(默认)',
'(set)': '(已设置)',
'(not set)': '(未设置)',
"Failed to switch model to '{{modelId}}'.\n\n{{error}}":
"无法切换到模型 '{{modelId}}'.\n\n{{error}}",
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
'来自阿里云 ModelStudio 的最新 Qwen Coder 模型版本qwen3-coder-plus-2025-09-23',
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
@@ -879,11 +849,11 @@ export default {
'Session Stats': '会话统计',
'Model Usage': '模型使用情况',
Reqs: '请求数',
'Input Tokens': '输入 token 数',
'Output Tokens': '输出 token 数',
'Input Tokens': '输入令牌',
'Output Tokens': '输出令牌',
'Savings Highlight:': '节省亮点:',
'of input tokens were served from the cache, reducing costs.':
'从缓存载入 token ,降低了成本',
'的输入令牌来自缓存,降低了成本',
'Tip: For a full token breakdown, run `/stats model`.':
'提示:要查看完整的令牌明细,请运行 `/stats model`',
'Model Stats For Nerds': '模型统计(技术细节)',

View File

@@ -31,7 +31,6 @@ import { quitCommand } from '../ui/commands/quitCommand.js';
import { restoreCommand } from '../ui/commands/restoreCommand.js';
import { resumeCommand } from '../ui/commands/resumeCommand.js';
import { settingsCommand } from '../ui/commands/settingsCommand.js';
import { skillsCommand } from '../ui/commands/skillsCommand.js';
import { statsCommand } from '../ui/commands/statsCommand.js';
import { summaryCommand } from '../ui/commands/summaryCommand.js';
import { terminalSetupCommand } from '../ui/commands/terminalSetupCommand.js';
@@ -79,7 +78,6 @@ export class BuiltinCommandLoader implements ICommandLoader {
quitCommand,
restoreCommand(this.config),
resumeCommand,
...(this.config?.getExperimentalSkills?.() ? [skillsCommand] : []),
statsCommand,
summaryCommand,
themeCommand,

View File

@@ -1,391 +0,0 @@
import * as cp from 'node:child_process';
import * as net from 'node:net';
interface PendingRequest {
resolve: (value: unknown) => void;
reject: (reason?: unknown) => void;
timer: NodeJS.Timeout;
}
class JsonRpcConnection {
private buffer = '';
private nextId = 1;
private disposed = false;
private pendingRequests = new Map<string | number, PendingRequest>();
private notificationHandlers: Array<(notification: JsonRpcMessage) => void> =
[];
private requestHandlers: Array<
(request: JsonRpcMessage) => Promise<unknown>
> = [];
constructor(
private readonly writer: (data: string) => void,
private readonly disposer?: () => void,
) {}
listen(readable: NodeJS.ReadableStream): void {
readable.on('data', (chunk: Buffer) => this.handleData(chunk));
readable.on('error', (error) =>
this.disposePending(
error instanceof Error ? error : new Error(String(error)),
),
);
}
send(message: JsonRpcMessage): void {
this.writeMessage(message);
}
onNotification(handler: (notification: JsonRpcMessage) => void): void {
this.notificationHandlers.push(handler);
}
onRequest(handler: (request: JsonRpcMessage) => Promise<unknown>): void {
this.requestHandlers.push(handler);
}
async initialize(params: unknown): Promise<unknown> {
return this.sendRequest('initialize', params);
}
async shutdown(): Promise<void> {
try {
await this.sendRequest('shutdown', {});
} catch (_error) {
// Ignore shutdown errors; the server may already be gone.
} finally {
this.end();
}
}
request(method: string, params: unknown): Promise<unknown> {
return this.sendRequest(method, params);
}
end(): void {
if (this.disposed) {
return;
}
this.disposed = true;
this.disposePending();
this.disposer?.();
}
private sendRequest(method: string, params: unknown): Promise<unknown> {
if (this.disposed) {
return Promise.resolve(undefined);
}
const id = this.nextId++;
const payload: JsonRpcMessage = {
jsonrpc: '2.0',
id,
method,
params,
};
const requestPromise = new Promise<unknown>((resolve, reject) => {
const timer = setTimeout(() => {
this.pendingRequests.delete(id);
reject(new Error(`LSP request timeout: ${method}`));
}, 15000);
this.pendingRequests.set(id, { resolve, reject, timer });
});
this.writeMessage(payload);
return requestPromise;
}
private async handleServerRequest(message: JsonRpcMessage): Promise<void> {
const handler = this.requestHandlers[this.requestHandlers.length - 1];
if (!handler) {
this.writeMessage({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32601,
message: `Method not supported: ${message.method}`,
},
});
return;
}
try {
const result = await handler(message);
this.writeMessage({
jsonrpc: '2.0',
id: message.id,
result: result ?? null,
});
} catch (error) {
this.writeMessage({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32603,
message: (error as Error).message ?? 'Internal error',
},
});
}
}
private handleData(chunk: Buffer): void {
if (this.disposed) {
return;
}
this.buffer += chunk.toString('utf8');
while (true) {
const headerEnd = this.buffer.indexOf('\r\n\r\n');
if (headerEnd === -1) {
break;
}
const header = this.buffer.slice(0, headerEnd);
const lengthMatch = /Content-Length:\s*(\d+)/i.exec(header);
if (!lengthMatch) {
this.buffer = this.buffer.slice(headerEnd + 4);
continue;
}
const contentLength = Number(lengthMatch[1]);
const messageStart = headerEnd + 4;
const messageEnd = messageStart + contentLength;
if (this.buffer.length < messageEnd) {
break;
}
const body = this.buffer.slice(messageStart, messageEnd);
this.buffer = this.buffer.slice(messageEnd);
try {
const message = JSON.parse(body);
this.routeMessage(message);
} catch {
// ignore malformed messages
}
}
}
private routeMessage(message: JsonRpcMessage): void {
if (typeof message?.id !== 'undefined' && !message.method) {
const pending = this.pendingRequests.get(message.id);
if (!pending) {
return;
}
clearTimeout(pending.timer);
this.pendingRequests.delete(message.id);
if (message.error) {
pending.reject(
new Error(message.error.message || 'LSP request failed'),
);
} else {
pending.resolve(message.result);
}
return;
}
if (message?.method && typeof message.id !== 'undefined') {
void this.handleServerRequest(message);
return;
}
if (message?.method) {
for (const handler of this.notificationHandlers) {
try {
handler(message);
} catch {
// ignore handler errors
}
}
}
}
private writeMessage(message: JsonRpcMessage): void {
if (this.disposed) {
return;
}
const json = JSON.stringify(message);
const header = `Content-Length: ${Buffer.byteLength(json, 'utf8')}\r\n\r\n`;
this.writer(header + json);
}
private disposePending(error?: Error): void {
for (const [, pending] of Array.from(this.pendingRequests)) {
clearTimeout(pending.timer);
pending.reject(error ?? new Error('LSP connection closed'));
}
this.pendingRequests.clear();
}
}
interface LspConnection {
connection: JsonRpcConnection;
process?: cp.ChildProcess;
socket?: net.Socket;
}
interface SocketConnectionOptions {
host?: string;
port?: number;
path?: string;
}
interface JsonRpcMessage {
jsonrpc: string;
id?: number | string;
method?: string;
params?: unknown;
result?: unknown;
error?: {
code: number;
message: string;
data?: unknown;
};
}
export class LspConnectionFactory {
/**
* Create a stdio-based LSP connection
*/
static async createStdioConnection(
command: string,
args: string[],
options?: cp.SpawnOptions,
timeoutMs = 10000,
): Promise<LspConnection> {
return new Promise((resolve, reject) => {
const spawnOptions: cp.SpawnOptions = {
stdio: 'pipe',
...options,
};
const processInstance = cp.spawn(command, args, spawnOptions);
const timeoutId = setTimeout(() => {
reject(new Error('LSP server spawn timeout'));
if (!processInstance.killed) {
processInstance.kill();
}
}, timeoutMs);
processInstance.once('error', (error) => {
clearTimeout(timeoutId);
reject(new Error(`Failed to spawn LSP server: ${error.message}`));
});
processInstance.once('spawn', () => {
clearTimeout(timeoutId);
if (!processInstance.stdout || !processInstance.stdin) {
reject(new Error('LSP server stdio not available'));
return;
}
const connection = new JsonRpcConnection(
(payload) => processInstance.stdin?.write(payload),
() => processInstance.stdin?.end(),
);
connection.listen(processInstance.stdout);
processInstance.once('exit', () => connection.end());
processInstance.once('close', () => connection.end());
resolve({
connection,
process: processInstance,
});
});
});
}
/**
* Create a TCP-based LSP connection
*/
static async createTcpConnection(
host: string,
port: number,
timeoutMs = 10000,
): Promise<LspConnection> {
return LspConnectionFactory.createSocketConnection(
{ host, port },
timeoutMs,
);
}
/**
* Create a socket-based LSP connection (supports TCP or Unix sockets)
*/
static async createSocketConnection(
options: SocketConnectionOptions,
timeoutMs = 10000,
): Promise<LspConnection> {
return new Promise((resolve, reject) => {
const socketOptions = options.path
? { path: options.path }
: { host: options.host ?? '127.0.0.1', port: options.port };
if (!('path' in socketOptions) && !socketOptions.port) {
reject(new Error('Socket transport requires port or path'));
return;
}
const socket = net.createConnection(socketOptions);
const timeoutId = setTimeout(() => {
reject(new Error('LSP server connection timeout'));
socket.destroy();
}, timeoutMs);
const onError = (error: Error) => {
clearTimeout(timeoutId);
reject(new Error(`Failed to connect to LSP server: ${error.message}`));
};
socket.once('error', onError);
socket.on('connect', () => {
clearTimeout(timeoutId);
socket.off('error', onError);
const connection = new JsonRpcConnection(
(payload) => socket.write(payload),
() => socket.destroy(),
);
connection.listen(socket);
socket.once('close', () => connection.end());
socket.once('error', () => connection.end());
resolve({
connection,
socket,
});
});
});
}
/**
* Close an LSP connection
*/
static async closeConnection(lspConnection: LspConnection): Promise<void> {
if (lspConnection.connection) {
try {
await lspConnection.connection.shutdown();
} catch (e) {
console.warn('LSP shutdown failed:', e);
} finally {
lspConnection.connection.end();
}
}
if (lspConnection.process && !lspConnection.process.killed) {
lspConnection.process.kill();
}
if (lspConnection.socket && !lspConnection.socket.destroyed) {
lspConnection.socket.destroy();
}
}
}
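
The removed JsonRpcConnection speaks the standard LSP base protocol: each message is a Content-Length header, a blank line, then the UTF-8 JSON body, which is what writeMessage emits and handleData reassembles from the stream. A minimal framing sketch (illustrative, not part of the deleted file):

const body = JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'initialize', params: {} });
const framed = `Content-Length: ${Buffer.byteLength(body, 'utf8')}\r\n\r\n${body}`;
// On the wire (58 is the byte length of this particular body):
//   Content-Length: 58
//
//   {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}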

View File

@@ -1,818 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { EventEmitter } from 'events';
import { NativeLspService } from './NativeLspService.js';
import type {
Config as CoreConfig,
WorkspaceContext,
FileDiscoveryService,
IdeContextStore,
LspLocation,
LspDiagnostic,
} from '@qwen-code/qwen-code-core';
import * as fs from 'node:fs';
import * as path from 'node:path';
import { pathToFileURL } from 'node:url';
/**
* Mock LSP server responses for integration testing.
* This simulates real LSP server behavior without requiring an actual server.
*/
const MOCK_LSP_RESPONSES = {
'initialize': {
capabilities: {
textDocumentSync: 1,
completionProvider: {},
hoverProvider: true,
definitionProvider: true,
referencesProvider: true,
documentSymbolProvider: true,
workspaceSymbolProvider: true,
codeActionProvider: true,
diagnosticProvider: {
interFileDependencies: true,
workspaceDiagnostics: true,
},
},
serverInfo: {
name: 'mock-lsp-server',
version: '1.0.0',
},
},
'textDocument/definition': [
{
uri: 'file:///test/workspace/src/types.ts',
range: {
start: { line: 10, character: 0 },
end: { line: 10, character: 20 },
},
},
],
'textDocument/references': [
{
uri: 'file:///test/workspace/src/app.ts',
range: {
start: { line: 5, character: 10 },
end: { line: 5, character: 20 },
},
},
{
uri: 'file:///test/workspace/src/utils.ts',
range: {
start: { line: 15, character: 5 },
end: { line: 15, character: 15 },
},
},
],
'textDocument/hover': {
contents: {
kind: 'markdown',
value: '```typescript\nfunction testFunc(): void\n```\n\nA test function.',
},
range: {
start: { line: 10, character: 0 },
end: { line: 10, character: 8 },
},
},
'textDocument/documentSymbol': [
{
name: 'TestClass',
kind: 5, // Class
range: {
start: { line: 0, character: 0 },
end: { line: 20, character: 1 },
},
selectionRange: {
start: { line: 0, character: 6 },
end: { line: 0, character: 15 },
},
children: [
{
name: 'constructor',
kind: 9, // Constructor
range: {
start: { line: 2, character: 2 },
end: { line: 4, character: 3 },
},
selectionRange: {
start: { line: 2, character: 2 },
end: { line: 2, character: 13 },
},
},
],
},
],
'workspace/symbol': [
{
name: 'TestClass',
kind: 5, // Class
location: {
uri: 'file:///test/workspace/src/test.ts',
range: {
start: { line: 0, character: 0 },
end: { line: 20, character: 1 },
},
},
},
{
name: 'testFunction',
kind: 12, // Function
location: {
uri: 'file:///test/workspace/src/utils.ts',
range: {
start: { line: 5, character: 0 },
end: { line: 10, character: 1 },
},
},
containerName: 'utils',
},
],
'textDocument/implementation': [
{
uri: 'file:///test/workspace/src/impl.ts',
range: {
start: { line: 20, character: 0 },
end: { line: 40, character: 1 },
},
},
],
'textDocument/prepareCallHierarchy': [
{
name: 'testFunction',
kind: 12, // Function
detail: '(param: string) => void',
uri: 'file:///test/workspace/src/utils.ts',
range: {
start: { line: 5, character: 0 },
end: { line: 10, character: 1 },
},
selectionRange: {
start: { line: 5, character: 9 },
end: { line: 5, character: 21 },
},
},
],
'callHierarchy/incomingCalls': [
{
from: {
name: 'callerFunction',
kind: 12,
uri: 'file:///test/workspace/src/caller.ts',
range: {
start: { line: 10, character: 0 },
end: { line: 15, character: 1 },
},
selectionRange: {
start: { line: 10, character: 9 },
end: { line: 10, character: 23 },
},
},
fromRanges: [
{
start: { line: 12, character: 2 },
end: { line: 12, character: 16 },
},
],
},
],
'callHierarchy/outgoingCalls': [
{
to: {
name: 'helperFunction',
kind: 12,
uri: 'file:///test/workspace/src/helper.ts',
range: {
start: { line: 0, character: 0 },
end: { line: 5, character: 1 },
},
selectionRange: {
start: { line: 0, character: 9 },
end: { line: 0, character: 23 },
},
},
fromRanges: [
{
start: { line: 7, character: 2 },
end: { line: 7, character: 16 },
},
],
},
],
'textDocument/diagnostic': {
kind: 'full',
items: [
{
range: {
start: { line: 5, character: 0 },
end: { line: 5, character: 10 },
},
severity: 1, // Error
code: 'TS2304',
source: 'typescript',
message: "Cannot find name 'undeclaredVar'.",
},
{
range: {
start: { line: 10, character: 0 },
end: { line: 10, character: 15 },
},
severity: 2, // Warning
code: 'TS6133',
source: 'typescript',
message: "'unusedVar' is declared but its value is never read.",
tags: [1], // Unnecessary
},
],
},
'workspace/diagnostic': {
items: [
{
kind: 'full',
uri: 'file:///test/workspace/src/app.ts',
items: [
{
range: {
start: { line: 5, character: 0 },
end: { line: 5, character: 10 },
},
severity: 1,
code: 'TS2304',
source: 'typescript',
message: "Cannot find name 'undeclaredVar'.",
},
],
},
{
kind: 'full',
uri: 'file:///test/workspace/src/utils.ts',
items: [
{
range: {
start: { line: 10, character: 0 },
end: { line: 10, character: 15 },
},
severity: 2,
code: 'TS6133',
source: 'typescript',
message: "'unusedVar' is declared but its value is never read.",
},
],
},
],
},
'textDocument/codeAction': [
{
title: "Add missing import 'React'",
kind: 'quickfix',
diagnostics: [
{
range: {
start: { line: 0, character: 0 },
end: { line: 0, character: 5 },
},
severity: 1,
message: "Cannot find name 'React'.",
},
],
edit: {
changes: {
'file:///test/workspace/src/app.tsx': [
{
range: {
start: { line: 0, character: 0 },
end: { line: 0, character: 0 },
},
newText: "import React from 'react';\n",
},
],
},
},
isPreferred: true,
},
{
title: 'Organize imports',
kind: 'source.organizeImports',
edit: {
changes: {
'file:///test/workspace/src/app.tsx': [
{
range: {
start: { line: 0, character: 0 },
end: { line: 5, character: 0 },
},
newText: "import { Component } from 'react';\nimport { helper } from './utils';\n",
},
],
},
},
},
],
};
/**
* Mock configuration for testing.
*/
class MockConfig {
rootPath = '/test/workspace';
private trusted = true;
isTrustedFolder(): boolean {
return this.trusted;
}
setTrusted(trusted: boolean): void {
this.trusted = trusted;
}
get(_key: string) {
return undefined;
}
getProjectRoot(): string {
return this.rootPath;
}
}
/**
* Mock workspace context for testing.
*/
class MockWorkspaceContext {
rootPath = '/test/workspace';
async fileExists(filePath: string): Promise<boolean> {
return (
filePath.endsWith('.json') ||
filePath.includes('package.json') ||
filePath.includes('.ts')
);
}
async readFile(filePath: string): Promise<string> {
if (filePath.includes('.lsp.json')) {
return JSON.stringify({
'mock-lsp': {
languages: ['typescript', 'javascript'],
command: 'mock-lsp-server',
args: ['--stdio'],
transport: 'stdio',
},
});
}
return '{}';
}
resolvePath(relativePath: string): string {
return this.rootPath + '/' + relativePath;
}
isPathWithinWorkspace(_path: string): boolean {
return true;
}
getDirectories(): string[] {
return [this.rootPath];
}
}
/**
* Mock file discovery service for testing.
*/
class MockFileDiscoveryService {
async discoverFiles(_root: string, _options: unknown): Promise<string[]> {
return [
'/test/workspace/src/index.ts',
'/test/workspace/src/app.ts',
'/test/workspace/src/utils.ts',
'/test/workspace/src/types.ts',
];
}
shouldIgnoreFile(file: string): boolean {
return file.includes('node_modules') || file.includes('.git');
}
}
/**
* Mock IDE context store for testing.
*/
class MockIdeContextStore {}
describe('NativeLspService Integration Tests', () => {
let lspService: NativeLspService;
let mockConfig: MockConfig;
let mockWorkspace: MockWorkspaceContext;
let mockFileDiscovery: MockFileDiscoveryService;
let mockIdeStore: MockIdeContextStore;
let eventEmitter: EventEmitter;
beforeEach(() => {
mockConfig = new MockConfig();
mockWorkspace = new MockWorkspaceContext();
mockFileDiscovery = new MockFileDiscoveryService();
mockIdeStore = new MockIdeContextStore();
eventEmitter = new EventEmitter();
lspService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
{
workspaceRoot: mockWorkspace.rootPath,
},
);
});
afterEach(() => {
vi.clearAllMocks();
});
describe('Service Lifecycle', () => {
it('should initialize service correctly', () => {
expect(lspService).toBeDefined();
});
it('should discover and prepare without errors', async () => {
await expect(lspService.discoverAndPrepare()).resolves.not.toThrow();
});
it('should return status after discovery', async () => {
await lspService.discoverAndPrepare();
const status = lspService.getStatus();
expect(status).toBeDefined();
expect(status instanceof Map).toBe(true);
});
it('should skip discovery for untrusted workspace', async () => {
mockConfig.setTrusted(false);
const untrustedService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
{
workspaceRoot: mockWorkspace.rootPath,
requireTrustedWorkspace: true,
},
);
await untrustedService.discoverAndPrepare();
const status = untrustedService.getStatus();
expect(status.size).toBe(0);
});
});
describe('Configuration Merging', () => {
it('should detect TypeScript/JavaScript in workspace', async () => {
await lspService.discoverAndPrepare();
const status = lspService.getStatus();
// Should have detected TypeScript based on mock file discovery
// The exact server name depends on built-in presets
expect(status.size).toBeGreaterThanOrEqual(0);
});
it('should respect allowed servers list', async () => {
const restrictedService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
{
workspaceRoot: mockWorkspace.rootPath,
allowedServers: ['typescript-language-server'],
},
);
await restrictedService.discoverAndPrepare();
const status = restrictedService.getStatus();
// Only allowed servers should be READY
const readyServers = Array.from(status.entries())
.filter(([, state]) => state === 'READY')
.map(([name]) => name);
for (const name of readyServers) {
expect(['typescript-language-server']).toContain(name);
}
});
it('should respect excluded servers list', async () => {
const restrictedService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
{
workspaceRoot: mockWorkspace.rootPath,
excludedServers: ['pylsp'],
},
);
await restrictedService.discoverAndPrepare();
const status = restrictedService.getStatus();
// pylsp should not be present or should be FAILED
const pylspStatus = status.get('pylsp');
expect(pylspStatus !== 'READY').toBe(true);
});
});
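// Illustrative sketch: the per-project .lsp.json shape the configuration tests above
// rely on, read straight from the mock workspace. Field names mirror the mock config,
// not a guaranteed public schema.
it('sketch: mock .lsp.json exposes command, args, and transport per server', async () => {
const raw = await mockWorkspace.readFile('/test/workspace/.lsp.json');
const parsed = JSON.parse(raw) as Record<
string,
{ languages: string[]; command: string; args: string[]; transport: string }
>;
expect(parsed['mock-lsp'].languages).toContain('typescript');
expect(parsed['mock-lsp'].command).toBe('mock-lsp-server');
expect(parsed['mock-lsp'].transport).toBe('stdio');
});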
describe('LSP Operations - Mock Responses', () => {
// Note: These tests verify the structure of expected responses
// In a real integration test, you would mock the connection or use a real server
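// Illustrative sketch only: one way to satisfy the note above is to stub the
// connection so it serves these canned payloads; the sendRequest shape here is an
// assumption for this example, not the service's real transport API.
it('sketch: a stubbed connection can serve the canned payloads', async () => {
const stubConnection = {
sendRequest: async (method: keyof typeof MOCK_LSP_RESPONSES) =>
MOCK_LSP_RESPONSES[method],
};
const definition = await stubConnection.sendRequest('textDocument/definition');
expect(definition).toEqual(MOCK_LSP_RESPONSES['textDocument/definition']);
});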
it('should format definition response correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/definition'];
expect(response).toHaveLength(1);
expect(response[0]).toHaveProperty('uri');
expect(response[0]).toHaveProperty('range');
expect(response[0].range.start).toHaveProperty('line');
expect(response[0].range.start).toHaveProperty('character');
});
it('should format references response correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/references'];
expect(response).toHaveLength(2);
for (const ref of response) {
expect(ref).toHaveProperty('uri');
expect(ref).toHaveProperty('range');
}
});
it('should format hover response correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/hover'];
expect(response).toHaveProperty('contents');
expect(response.contents).toHaveProperty('value');
expect(response.contents.value).toContain('testFunc');
});
it('should format document symbols correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/documentSymbol'];
expect(response).toHaveLength(1);
expect(response[0].name).toBe('TestClass');
expect(response[0].kind).toBe(5); // Class
expect(response[0].children).toHaveLength(1);
});
it('should format workspace symbols correctly', () => {
const response = MOCK_LSP_RESPONSES['workspace/symbol'];
expect(response).toHaveLength(2);
expect(response[0].name).toBe('TestClass');
expect(response[1].name).toBe('testFunction');
expect(response[1].containerName).toBe('utils');
});
it('should format call hierarchy items correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/prepareCallHierarchy'];
expect(response).toHaveLength(1);
expect(response[0].name).toBe('testFunction');
expect(response[0]).toHaveProperty('detail');
expect(response[0]).toHaveProperty('range');
expect(response[0]).toHaveProperty('selectionRange');
});
it('should format incoming calls correctly', () => {
const response = MOCK_LSP_RESPONSES['callHierarchy/incomingCalls'];
expect(response).toHaveLength(1);
expect(response[0].from.name).toBe('callerFunction');
expect(response[0].fromRanges).toHaveLength(1);
});
it('should format outgoing calls correctly', () => {
const response = MOCK_LSP_RESPONSES['callHierarchy/outgoingCalls'];
expect(response).toHaveLength(1);
expect(response[0].to.name).toBe('helperFunction');
expect(response[0].fromRanges).toHaveLength(1);
});
it('should format diagnostics correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/diagnostic'];
expect(response.items).toHaveLength(2);
expect(response.items[0].severity).toBe(1); // Error
expect(response.items[0].code).toBe('TS2304');
expect(response.items[1].severity).toBe(2); // Warning
expect(response.items[1].tags).toContain(1); // Unnecessary
});
it('should format workspace diagnostics correctly', () => {
const response = MOCK_LSP_RESPONSES['workspace/diagnostic'];
expect(response.items).toHaveLength(2);
expect(response.items[0].uri).toContain('app.ts');
expect(response.items[1].uri).toContain('utils.ts');
});
it('should format code actions correctly', () => {
const response = MOCK_LSP_RESPONSES['textDocument/codeAction'];
expect(response).toHaveLength(2);
const quickfix = response[0];
expect(quickfix.title).toContain('import');
expect(quickfix.kind).toBe('quickfix');
expect(quickfix.isPreferred).toBe(true);
expect(quickfix.edit).toHaveProperty('changes');
const organizeImports = response[1];
expect(organizeImports.kind).toBe('source.organizeImports');
});
});
describe('Diagnostic Normalization', () => {
it('should normalize severity levels correctly', () => {
const severityMap: Record<number, string> = {
1: 'error',
2: 'warning',
3: 'information',
4: 'hint',
};
for (const [num, label] of Object.entries(severityMap)) {
expect(severityMap[Number(num)]).toBe(label);
}
});
it('should normalize diagnostic tags correctly', () => {
const tagMap: Record<number, string> = {
1: 'unnecessary',
2: 'deprecated',
};
expect(tagMap[1]).toBe('unnecessary');
expect(tagMap[2]).toBe('deprecated');
});
});
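// A minimal sketch of the mapping exercised above, assuming diagnostics arrive with
// LSP's numeric severity codes and are reported back as labels; the helper is
// illustrative and not part of the service's public API.
it('sketch: maps numeric LSP severities to labels', () => {
const toLabel = (severity: number): string | undefined =>
({ 1: 'error', 2: 'warning', 3: 'information', 4: 'hint' } as const)[
severity as 1 | 2 | 3 | 4
];
expect(toLabel(1)).toBe('error');
expect(toLabel(2)).toBe('warning');
expect(toLabel(99)).toBeUndefined();
});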
describe('Code Action Context', () => {
it('should support filtering by code action kind', () => {
const kinds = ['quickfix', 'refactor', 'source.organizeImports'];
const filteredActions = MOCK_LSP_RESPONSES['textDocument/codeAction'].filter(
(action) => kinds.includes(action.kind),
);
expect(filteredActions).toHaveLength(2);
});
it('should support quick fix actions with diagnostics', () => {
const quickfix = MOCK_LSP_RESPONSES['textDocument/codeAction'][0];
expect(quickfix.diagnostics).toBeDefined();
expect(quickfix.diagnostics).toHaveLength(1);
expect(quickfix.edit).toBeDefined();
});
});
describe('Workspace Edit Application', () => {
it('should structure workspace edits correctly', () => {
const codeAction = MOCK_LSP_RESPONSES['textDocument/codeAction'][0];
const edit = codeAction.edit;
expect(edit).toHaveProperty('changes');
expect(edit?.changes).toBeDefined();
const uri = Object.keys(edit?.changes ?? {})[0];
expect(uri).toContain('app.tsx');
const edits = edit?.changes?.[uri];
expect(edits).toHaveLength(1);
expect(edits?.[0]).toHaveProperty('range');
expect(edits?.[0]).toHaveProperty('newText');
});
});
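// A minimal sketch of consuming the edit shape above, assuming a single-line,
// in-memory edit; the applyEdit helper is illustrative only, not the service's API.
it('sketch: applies a single-line workspace edit to in-memory text', () => {
const applyEdit = (
text: string,
edit: {
range: {
start: { line: number; character: number };
end: { line: number; character: number };
};
newText: string;
},
): string => {
const lines = text.split('\n');
const { start, end } = edit.range;
const target = lines[start.line];
lines[start.line] =
target.slice(0, start.character) + edit.newText + target.slice(end.character);
return lines.join('\n');
};
const updated = applyEdit('export const App = () => null;', {
range: { start: { line: 0, character: 0 }, end: { line: 0, character: 0 } },
newText: "import React from 'react';\n",
});
expect(updated.startsWith("import React from 'react';")).toBe(true);
});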
describe('Error Handling', () => {
it('should handle missing workspace gracefully', async () => {
const emptyWorkspace = new MockWorkspaceContext();
emptyWorkspace.getDirectories = () => [];
const service = new NativeLspService(
mockConfig as unknown as CoreConfig,
emptyWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
);
await expect(service.discoverAndPrepare()).resolves.not.toThrow();
});
it('should return empty results when no server is ready', async () => {
// Before starting any servers, operations should return empty
const results = await lspService.workspaceSymbols('test');
expect(results).toEqual([]);
});
it('should return empty diagnostics when no server is ready', async () => {
const uri = 'file:///test/workspace/src/app.ts';
const results = await lspService.diagnostics(uri);
expect(results).toEqual([]);
});
it('should return empty code actions when no server is ready', async () => {
const uri = 'file:///test/workspace/src/app.ts';
const range = {
start: { line: 0, character: 0 },
end: { line: 0, character: 10 },
};
const context = {
diagnostics: [],
only: undefined,
triggerKind: 'invoked' as const,
};
const results = await lspService.codeActions(uri, range, context);
expect(results).toEqual([]);
});
});
describe('Security Controls', () => {
it('should respect trust requirements', async () => {
mockConfig.setTrusted(false);
const strictService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
{
requireTrustedWorkspace: true,
},
);
await strictService.discoverAndPrepare();
const status = strictService.getStatus();
// No servers should be discovered in untrusted workspace
expect(status.size).toBe(0);
});
it('should allow operations in trusted workspace', async () => {
mockConfig.setTrusted(true);
await lspService.discoverAndPrepare();
// Service should be ready to accept operations (even if no real server)
expect(lspService).toBeDefined();
});
});
});
describe('LSP Response Type Validation', () => {
describe('LspDiagnostic', () => {
it('should have correct structure', () => {
const diagnostic: LspDiagnostic = {
range: {
start: { line: 0, character: 0 },
end: { line: 0, character: 10 },
},
severity: 'error',
code: 'TS2304',
source: 'typescript',
message: 'Cannot find name.',
};
expect(diagnostic.range).toBeDefined();
expect(diagnostic.severity).toBe('error');
expect(diagnostic.code).toBe('TS2304');
expect(diagnostic.source).toBe('typescript');
expect(diagnostic.message).toBeDefined();
});
it('should support optional fields', () => {
const minimalDiagnostic: LspDiagnostic = {
range: {
start: { line: 0, character: 0 },
end: { line: 0, character: 10 },
},
message: 'Error message',
};
expect(minimalDiagnostic.severity).toBeUndefined();
expect(minimalDiagnostic.code).toBeUndefined();
expect(minimalDiagnostic.source).toBeUndefined();
});
});
describe('LspLocation', () => {
it('should have correct structure', () => {
const location: LspLocation = {
uri: 'file:///test/file.ts',
range: {
start: { line: 10, character: 5 },
end: { line: 10, character: 15 },
},
};
expect(location.uri).toBe('file:///test/file.ts');
expect(location.range.start.line).toBe(10);
expect(location.range.start.character).toBe(5);
expect(location.range.end.line).toBe(10);
expect(location.range.end.character).toBe(15);
});
});
});

View File

@@ -1,127 +0,0 @@
import { NativeLspService } from './NativeLspService.js';
import { EventEmitter } from 'events';
import type {
Config as CoreConfig,
WorkspaceContext,
FileDiscoveryService,
IdeContextStore,
} from '@qwen-code/qwen-code-core';
// Mock dependencies
class MockConfig {
rootPath = '/test/workspace';
isTrustedFolder(): boolean {
return true;
}
get(_key: string) {
return undefined;
}
getProjectRoot(): string {
return this.rootPath;
}
}
class MockWorkspaceContext {
rootPath = '/test/workspace';
async fileExists(_path: string): Promise<boolean> {
return _path.endsWith('.json') || _path.includes('package.json');
}
async readFile(_path: string): Promise<string> {
if (_path.includes('.lsp.json')) {
return JSON.stringify({
typescript: {
command: 'typescript-language-server',
args: ['--stdio'],
transport: 'stdio',
},
});
}
return '{}';
}
resolvePath(_path: string): string {
return this.rootPath + '/' + _path;
}
isPathWithinWorkspace(_path: string): boolean {
return true;
}
getDirectories(): string[] {
return [this.rootPath];
}
}
class MockFileDiscoveryService {
async discoverFiles(_root: string, _options: unknown): Promise<string[]> {
// Simulate discovering a few files
return [
'/test/workspace/src/index.ts',
'/test/workspace/src/utils.ts',
'/test/workspace/server.py',
'/test/workspace/main.go',
];
}
shouldIgnoreFile(): boolean {
return false;
}
}
class MockIdeContextStore {
// Mock IDE context store
}
describe('NativeLspService', () => {
let lspService: NativeLspService;
let mockConfig: MockConfig;
let mockWorkspace: MockWorkspaceContext;
let mockFileDiscovery: MockFileDiscoveryService;
let mockIdeStore: MockIdeContextStore;
let eventEmitter: EventEmitter;
beforeEach(() => {
mockConfig = new MockConfig();
mockWorkspace = new MockWorkspaceContext();
mockFileDiscovery = new MockFileDiscoveryService();
mockIdeStore = new MockIdeContextStore();
eventEmitter = new EventEmitter();
lspService = new NativeLspService(
mockConfig as unknown as CoreConfig,
mockWorkspace as unknown as WorkspaceContext,
eventEmitter,
mockFileDiscovery as unknown as FileDiscoveryService,
mockIdeStore as unknown as IdeContextStore,
);
});
test('should initialize correctly', () => {
expect(lspService).toBeDefined();
});
test('should detect languages from workspace files', async () => {
// This test needs rework because we cannot access private methods directly
await lspService.discoverAndPrepare();
const status = lspService.getStatus();
// Check that the service is ready
expect(status).toBeDefined();
});
test('should merge built-in presets with user configs', async () => {
await lspService.discoverAndPrepare();
const status = lspService.getStatus();
// Check that the service is ready
expect(status).toBeDefined();
});
});
// Note: real unit tests require proper test framework configuration.
// This is only a structural example.

File diff suppressed because it is too large

View File

@@ -6,12 +6,10 @@
import { render } from 'ink-testing-library';
import type React from 'react';
import type { Config } from '@qwen-code/qwen-code-core';
import { LoadedSettings } from '../config/settings.js';
import { KeypressProvider } from '../ui/contexts/KeypressContext.js';
import { SettingsContext } from '../ui/contexts/SettingsContext.js';
import { ShellFocusContext } from '../ui/contexts/ShellFocusContext.js';
import { ConfigContext } from '../ui/contexts/ConfigContext.js';
const mockSettings = new LoadedSettings(
{ path: '', settings: {}, originalSettings: {} },
@@ -24,24 +22,14 @@ const mockSettings = new LoadedSettings(
export const renderWithProviders = (
component: React.ReactElement,
{
shellFocus = true,
settings = mockSettings,
config = undefined,
}: {
shellFocus?: boolean;
settings?: LoadedSettings;
config?: Config;
} = {},
{ shellFocus = true, settings = mockSettings } = {},
): ReturnType<typeof render> =>
render(
<SettingsContext.Provider value={settings}>
<ConfigContext.Provider value={config}>
<ShellFocusContext.Provider value={shellFocus}>
<KeypressProvider kittyProtocolEnabled={true}>
{component}
</KeypressProvider>
</ShellFocusContext.Provider>
</ConfigContext.Provider>
<ShellFocusContext.Provider value={shellFocus}>
<KeypressProvider kittyProtocolEnabled={true}>
{component}
</KeypressProvider>
</ShellFocusContext.Provider>
</SettingsContext.Provider>,
);

View File

@@ -32,6 +32,7 @@ import {
type Config,
type IdeInfo,
type IdeContext,
DEFAULT_GEMINI_FLASH_MODEL,
IdeClient,
ideContextStore,
getErrorMessage,
@@ -45,7 +46,6 @@ import process from 'node:process';
import { useHistory } from './hooks/useHistoryManager.js';
import { useMemoryMonitor } from './hooks/useMemoryMonitor.js';
import { useThemeCommand } from './hooks/useThemeCommand.js';
import { useFeedbackDialog } from './hooks/useFeedbackDialog.js';
import { useAuthCommand } from './auth/useAuth.js';
import { useEditorSettings } from './hooks/useEditorSettings.js';
import { useSettingsCommand } from './hooks/useSettingsCommand.js';
@@ -180,10 +180,15 @@ export const AppContainer = (props: AppContainerProps) => {
[],
);
// Helper to determine the current model (polled, since Config has no model-change event).
const getCurrentModel = useCallback(() => config.getModel(), [config]);
// Helper to determine the effective model, considering the fallback state.
const getEffectiveModel = useCallback(() => {
if (config.isInFallbackMode()) {
return DEFAULT_GEMINI_FLASH_MODEL;
}
return config.getModel();
}, [config]);
const [currentModel, setCurrentModel] = useState(getCurrentModel());
const [currentModel, setCurrentModel] = useState(getEffectiveModel());
const [isConfigInitialized, setConfigInitialized] = useState(false);
@@ -236,12 +241,12 @@ export const AppContainer = (props: AppContainerProps) => {
[historyManager.addItem],
);
// Watch for model changes (e.g., user switches model via /model)
// Watch for model changes (e.g., from Flash fallback)
useEffect(() => {
const checkModelChange = () => {
const model = getCurrentModel();
if (model !== currentModel) {
setCurrentModel(model);
const effectiveModel = getEffectiveModel();
if (effectiveModel !== currentModel) {
setCurrentModel(effectiveModel);
}
};
@@ -249,7 +254,7 @@ export const AppContainer = (props: AppContainerProps) => {
const interval = setInterval(checkModelChange, 1000); // Check every second
return () => clearInterval(interval);
}, [config, currentModel, getCurrentModel]);
}, [config, currentModel, getEffectiveModel]);
const {
consoleMessages,
@@ -371,36 +376,37 @@ export const AppContainer = (props: AppContainerProps) => {
// Check for enforced auth type mismatch
useEffect(() => {
// Check for initialization error first
const currentAuthType = config.modelsConfig.getCurrentAuthType();
if (
settings.merged.security?.auth?.enforcedType &&
currentAuthType &&
settings.merged.security?.auth.enforcedType !== currentAuthType
settings.merged.security?.auth.selectedType &&
settings.merged.security?.auth.enforcedType !==
settings.merged.security?.auth.selectedType
) {
onAuthError(
t(
'Authentication is enforced to be {{enforcedType}}, but you are currently using {{currentType}}.',
{
enforcedType: String(settings.merged.security?.auth.enforcedType),
currentType: String(currentAuthType),
enforcedType: settings.merged.security?.auth.enforcedType,
currentType: settings.merged.security?.auth.selectedType,
},
),
);
} else if (!settings.merged.security?.auth?.useExternal) {
// If no authType is selected yet, allow the auth UI flow to prompt the user.
// Only validate credentials once a concrete authType exists.
if (currentAuthType) {
const error = validateAuthMethod(currentAuthType, config);
if (error) {
onAuthError(error);
}
} else if (
settings.merged.security?.auth?.selectedType &&
!settings.merged.security?.auth?.useExternal
) {
const error = validateAuthMethod(
settings.merged.security.auth.selectedType,
);
if (error) {
onAuthError(error);
}
}
}, [
settings.merged.security?.auth?.selectedType,
settings.merged.security?.auth?.enforcedType,
settings.merged.security?.auth?.useExternal,
config,
onAuthError,
]);
@@ -576,6 +582,7 @@ export const AppContainer = (props: AppContainerProps) => {
config.getExtensionContextFilePaths(),
config.isTrustedFolder(),
settings.merged.context?.importFormat || 'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
);
config.setUserMemory(memoryContent);
@@ -1196,19 +1203,6 @@ export const AppContainer = (props: AppContainerProps) => {
isApprovalModeDialogOpen ||
isResumeDialogOpen;
const {
isFeedbackDialogOpen,
openFeedbackDialog,
closeFeedbackDialog,
submitFeedback,
} = useFeedbackDialog({
config,
settings,
streamingState,
history: historyManager.history,
sessionStats,
});
const pendingHistoryItems = useMemo(
() => [...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems],
[pendingSlashCommandHistoryItems, pendingGeminiHistoryItems],
@@ -1305,8 +1299,6 @@ export const AppContainer = (props: AppContainerProps) => {
// Subagent dialogs
isSubagentCreateDialogOpen,
isAgentsManagerDialogOpen,
// Feedback dialog
isFeedbackDialogOpen,
}),
[
isThemeDialogOpen,
@@ -1397,8 +1389,6 @@ export const AppContainer = (props: AppContainerProps) => {
// Subagent dialogs
isSubagentCreateDialogOpen,
isAgentsManagerDialogOpen,
// Feedback dialog
isFeedbackDialogOpen,
],
);
@@ -1439,10 +1429,6 @@ export const AppContainer = (props: AppContainerProps) => {
openResumeDialog,
closeResumeDialog,
handleResume,
// Feedback dialog
openFeedbackDialog,
closeFeedbackDialog,
submitFeedback,
}),
[
handleThemeSelect,
@@ -1478,10 +1464,6 @@ export const AppContainer = (props: AppContainerProps) => {
openResumeDialog,
closeResumeDialog,
handleResume,
// Feedback dialog
openFeedbackDialog,
closeFeedbackDialog,
submitFeedback,
],
);

View File

@@ -1,61 +0,0 @@
import { Box, Text } from 'ink';
import type React from 'react';
import { t } from '../i18n/index.js';
import { useUIActions } from './contexts/UIActionsContext.js';
import { useUIState } from './contexts/UIStateContext.js';
import { useKeypress } from './hooks/useKeypress.js';
const FEEDBACK_OPTIONS = {
GOOD: 1,
BAD: 2,
NOT_SURE: 3,
} as const;
const FEEDBACK_OPTION_KEYS = {
[FEEDBACK_OPTIONS.GOOD]: '1',
[FEEDBACK_OPTIONS.BAD]: '2',
[FEEDBACK_OPTIONS.NOT_SURE]: 'any',
} as const;
export const FEEDBACK_DIALOG_KEYS = ['1', '2'] as const;
export const FeedbackDialog: React.FC = () => {
const uiState = useUIState();
const uiActions = useUIActions();
useKeypress(
(key) => {
if (key.name === FEEDBACK_OPTION_KEYS[FEEDBACK_OPTIONS.GOOD]) {
uiActions.submitFeedback(FEEDBACK_OPTIONS.GOOD);
} else if (key.name === FEEDBACK_OPTION_KEYS[FEEDBACK_OPTIONS.BAD]) {
uiActions.submitFeedback(FEEDBACK_OPTIONS.BAD);
} else {
uiActions.submitFeedback(FEEDBACK_OPTIONS.NOT_SURE);
}
uiActions.closeFeedbackDialog();
},
{ isActive: uiState.isFeedbackDialogOpen },
);
return (
<Box flexDirection="column" marginY={1}>
<Box>
<Text color="cyan"> </Text>
<Text bold>{t('How is Qwen doing this session? (optional)')}</Text>
</Box>
<Box marginTop={1}>
<Text color="cyan">
{FEEDBACK_OPTION_KEYS[FEEDBACK_OPTIONS.GOOD]}:{' '}
</Text>
<Text>{t('Good')}</Text>
<Text> </Text>
<Text color="cyan">{FEEDBACK_OPTION_KEYS[FEEDBACK_OPTIONS.BAD]}: </Text>
<Text>{t('Bad')}</Text>
<Text> </Text>
<Text color="cyan">{t('Any other key')}: </Text>
<Text>{t('Not Sure Yet')}</Text>
</Box>
</Box>
);
};

View File

@@ -6,8 +6,7 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { AuthDialog } from './AuthDialog.js';
import { LoadedSettings } from '../../config/settings.js';
import type { Config } from '@qwen-code/qwen-code-core';
import { LoadedSettings, SettingScope } from '../../config/settings.js';
import { AuthType } from '@qwen-code/qwen-code-core';
import { renderWithProviders } from '../../test-utils/render.js';
import { UIStateContext } from '../contexts/UIStateContext.js';
@@ -44,24 +43,17 @@ const renderAuthDialog = (
settings: LoadedSettings,
uiStateOverrides: Partial<UIState> = {},
uiActionsOverrides: Partial<UIActions> = {},
configAuthType: AuthType | undefined = undefined,
configApiKey: string | undefined = undefined,
) => {
const uiState = createMockUIState(uiStateOverrides);
const uiActions = createMockUIActions(uiActionsOverrides);
const mockConfig = {
getAuthType: vi.fn(() => configAuthType),
getContentGeneratorConfig: vi.fn(() => ({ apiKey: configApiKey })),
} as unknown as Config;
return renderWithProviders(
<UIStateContext.Provider value={uiState}>
<UIActionsContext.Provider value={uiActions}>
<AuthDialog />
</UIActionsContext.Provider>
</UIStateContext.Provider>,
{ settings, config: mockConfig },
{ settings },
);
};
@@ -429,7 +421,6 @@ describe('AuthDialog', () => {
settings,
{},
{ handleAuthSelect },
undefined, // config.getAuthType() returns undefined
);
await wait();
@@ -484,7 +475,6 @@ describe('AuthDialog', () => {
settings,
{ authError: 'Initial error' },
{ handleAuthSelect },
undefined, // config.getAuthType() returns undefined
);
await wait();
@@ -538,7 +528,6 @@ describe('AuthDialog', () => {
settings,
{},
{ handleAuthSelect },
AuthType.USE_OPENAI, // config.getAuthType() returns USE_OPENAI
);
await wait();
@@ -547,7 +536,7 @@ describe('AuthDialog', () => {
await wait();
// Should call handleAuthSelect with undefined to exit
expect(handleAuthSelect).toHaveBeenCalledWith(undefined);
expect(handleAuthSelect).toHaveBeenCalledWith(undefined, SettingScope.User);
unmount();
});
});

View File

@@ -8,12 +8,13 @@ import type React from 'react';
import { useState } from 'react';
import { AuthType } from '@qwen-code/qwen-code-core';
import { Box, Text } from 'ink';
import { SettingScope } from '../../config/settings.js';
import { Colors } from '../colors.js';
import { useKeypress } from '../hooks/useKeypress.js';
import { RadioButtonSelect } from '../components/shared/RadioButtonSelect.js';
import { useUIState } from '../contexts/UIStateContext.js';
import { useUIActions } from '../contexts/UIActionsContext.js';
import { useConfig } from '../contexts/ConfigContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { t } from '../../i18n/index.js';
function parseDefaultAuthType(
@@ -31,7 +32,7 @@ function parseDefaultAuthType(
export function AuthDialog(): React.JSX.Element {
const { pendingAuthType, authError } = useUIState();
const { handleAuthSelect: onAuthSelect } = useUIActions();
const config = useConfig();
const settings = useSettings();
const [errorMessage, setErrorMessage] = useState<string | null>(null);
const [selectedIndex, setSelectedIndex] = useState<number | null>(null);
@@ -57,10 +58,9 @@ export function AuthDialog(): React.JSX.Element {
return item.value === pendingAuthType;
}
// Priority 2: config.getAuthType() - the source of truth
const currentAuthType = config.getAuthType();
if (currentAuthType) {
return item.value === currentAuthType;
// Priority 2: settings.merged.security?.auth?.selectedType
if (settings.merged.security?.auth?.selectedType) {
return item.value === settings.merged.security?.auth?.selectedType;
}
// Priority 3: QWEN_DEFAULT_AUTH_TYPE env var
@@ -76,7 +76,7 @@ export function AuthDialog(): React.JSX.Element {
}),
);
const hasApiKey = Boolean(config.getContentGeneratorConfig()?.apiKey);
const hasApiKey = Boolean(settings.merged.security?.auth?.apiKey);
const currentSelectedAuthType =
selectedIndex !== null
? items[selectedIndex]?.value
@@ -84,7 +84,7 @@ export function AuthDialog(): React.JSX.Element {
const handleAuthSelect = async (authMethod: AuthType) => {
setErrorMessage(null);
await onAuthSelect(authMethod);
await onAuthSelect(authMethod, SettingScope.User);
};
const handleHighlight = (authMethod: AuthType) => {
@@ -100,7 +100,7 @@ export function AuthDialog(): React.JSX.Element {
if (errorMessage) {
return;
}
if (config.getAuthType() === undefined) {
if (settings.merged.security?.auth?.selectedType === undefined) {
// Prevent exiting if no auth method is set
setErrorMessage(
t(
@@ -109,7 +109,7 @@ export function AuthDialog(): React.JSX.Element {
);
return;
}
onAuthSelect(undefined);
onAuthSelect(undefined, SettingScope.User);
}
},
{ isActive: true },

View File

@@ -4,20 +4,16 @@
* SPDX-License-Identifier: Apache-2.0
*/
import type {
Config,
ContentGeneratorConfig,
ModelProvidersConfig,
} from '@qwen-code/qwen-code-core';
import type { Config } from '@qwen-code/qwen-code-core';
import {
AuthEvent,
AuthType,
clearCachedCredentialFile,
getErrorMessage,
logAuth,
} from '@qwen-code/qwen-code-core';
import { useCallback, useEffect, useState } from 'react';
import type { LoadedSettings } from '../../config/settings.js';
import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js';
import type { LoadedSettings, SettingScope } from '../../config/settings.js';
import type { OpenAICredentials } from '../components/OpenAIKeyPrompt.js';
import { useQwenAuth } from '../hooks/useQwenAuth.js';
import { AuthState, MessageType } from '../types.js';
@@ -31,7 +27,8 @@ export const useAuthCommand = (
config: Config,
addItem: (item: Omit<HistoryItem, 'id'>, timestamp: number) => void,
) => {
const unAuthenticated = config.getAuthType() === undefined;
const unAuthenticated =
settings.merged.security?.auth?.selectedType === undefined;
const [authState, setAuthState] = useState<AuthState>(
unAuthenticated ? AuthState.Updating : AuthState.Unauthenticated,
@@ -84,46 +81,35 @@ export const useAuthCommand = (
);
const handleAuthSuccess = useCallback(
async (authType: AuthType, credentials?: OpenAICredentials) => {
async (
authType: AuthType,
scope: SettingScope,
credentials?: OpenAICredentials,
) => {
try {
const authTypeScope = getPersistScopeForModelSelection(settings);
// Persist authType
settings.setValue(
authTypeScope,
'security.auth.selectedType',
authType,
);
// Persist model from ContentGenerator config (handles fallback cases)
// This ensures that when syncAfterAuthRefresh falls back to default model,
// it gets persisted to settings.json
const contentGeneratorConfig = config.getContentGeneratorConfig();
if (contentGeneratorConfig?.model) {
settings.setValue(
authTypeScope,
'model.name',
contentGeneratorConfig.model,
);
}
settings.setValue(scope, 'security.auth.selectedType', authType);
// Only update credentials if not switching to QWEN_OAUTH,
// so that OpenAI credentials are preserved when switching to QWEN_OAUTH.
if (authType !== AuthType.QWEN_OAUTH && credentials) {
if (credentials?.apiKey != null) {
settings.setValue(
authTypeScope,
scope,
'security.auth.apiKey',
credentials.apiKey,
);
}
if (credentials?.baseUrl != null) {
settings.setValue(
authTypeScope,
scope,
'security.auth.baseUrl',
credentials.baseUrl,
);
}
if (credentials?.model != null) {
settings.setValue(scope, 'model.name', credentials.model);
}
await clearCachedCredentialFile();
}
} catch (error) {
handleAuthFailure(error);
@@ -155,10 +141,14 @@ export const useAuthCommand = (
);
const performAuth = useCallback(
async (authType: AuthType, credentials?: OpenAICredentials) => {
async (
authType: AuthType,
scope: SettingScope,
credentials?: OpenAICredentials,
) => {
try {
await config.refreshAuth(authType);
handleAuthSuccess(authType, credentials);
handleAuthSuccess(authType, scope, credentials);
} catch (e) {
handleAuthFailure(e);
}
@@ -166,51 +156,18 @@ export const useAuthCommand = (
[config, handleAuthSuccess, handleAuthFailure],
);
const isProviderManagedModel = useCallback(
(authType: AuthType, modelId: string | undefined) => {
if (!modelId) {
return false;
}
const modelProviders = settings.merged.modelProviders as
| ModelProvidersConfig
| undefined;
if (!modelProviders) {
return false;
}
const providerModels = modelProviders[authType];
if (!Array.isArray(providerModels)) {
return false;
}
return providerModels.some(
(providerModel) => providerModel.id === modelId,
);
},
[settings],
);
const handleAuthSelect = useCallback(
async (authType: AuthType | undefined, credentials?: OpenAICredentials) => {
async (
authType: AuthType | undefined,
scope: SettingScope,
credentials?: OpenAICredentials,
) => {
if (!authType) {
setIsAuthDialogOpen(false);
setAuthError(null);
return;
}
if (
authType === AuthType.USE_OPENAI &&
credentials?.model &&
isProviderManagedModel(authType, credentials.model)
) {
onAuthError(
t(
'Model "{{modelName}}" is managed via settings.modelProviders. Please complete the fields in settings, or use another model id.',
{ modelName: credentials.model },
),
);
return;
}
setPendingAuthType(authType);
setAuthError(null);
setIsAuthDialogOpen(false);
@@ -218,33 +175,19 @@ export const useAuthCommand = (
if (authType === AuthType.USE_OPENAI) {
if (credentials) {
// Pass settings.model.generationConfig to updateCredentials so it can be merged
// after clearing provider-sourced config. This ensures settings.json generationConfig
// fields (e.g., samplingParams, timeout) are preserved.
const settingsGenerationConfig = settings.merged.model
?.generationConfig as Partial<ContentGeneratorConfig> | undefined;
config.updateCredentials(
{
apiKey: credentials.apiKey,
baseUrl: credentials.baseUrl,
model: credentials.model,
},
settingsGenerationConfig,
);
await performAuth(authType, credentials);
config.updateCredentials({
apiKey: credentials.apiKey,
baseUrl: credentials.baseUrl,
model: credentials.model,
});
await performAuth(authType, scope, credentials);
}
return;
}
await performAuth(authType);
await performAuth(authType, scope);
},
[
config,
performAuth,
isProviderManagedModel,
onAuthError,
settings.merged.model?.generationConfig,
],
[config, performAuth],
);
const openAuthDialog = useCallback(() => {

View File

@@ -54,7 +54,9 @@ describe('directoryCommand', () => {
services: {
config: mockConfig,
settings: {
merged: {},
merged: {
memoryDiscoveryMaxDirs: 1000,
},
},
},
ui: {

View File

@@ -119,6 +119,8 @@ export const directoryCommand: SlashCommand = {
config.getFolderTrust(),
context.services.settings.merged.context?.importFormat ||
'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
context.services.settings.merged.context?.discoveryMaxDirs,
);
config.setUserMemory(memoryContent);
config.setGeminiMdFileCount(fileCount);

View File

@@ -11,14 +11,9 @@ import type { SlashCommand, type CommandContext } from './types.js';
import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
import { MessageType } from '../types.js';
import type { LoadedSettings } from '../../config/settings.js';
import { readFile } from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';
import {
getErrorMessage,
loadServerHierarchicalMemory,
QWEN_DIR,
setGeminiMdFilename,
type FileDiscoveryService,
type LoadServerHierarchicalMemoryResponse,
} from '@qwen-code/qwen-code-core';
@@ -36,18 +31,7 @@ vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
};
});
vi.mock('node:fs/promises', () => {
const readFile = vi.fn();
return {
readFile,
default: {
readFile,
},
};
});
const mockLoadServerHierarchicalMemory = loadServerHierarchicalMemory as Mock;
const mockReadFile = readFile as unknown as Mock;
describe('memoryCommand', () => {
let mockContext: CommandContext;
@@ -68,10 +52,6 @@ describe('memoryCommand', () => {
let mockGetGeminiMdFileCount: Mock;
beforeEach(() => {
setGeminiMdFilename('QWEN.md');
mockReadFile.mockReset();
vi.restoreAllMocks();
showCommand = getSubCommand('show');
mockGetUserMemory = vi.fn();
@@ -122,52 +102,6 @@ describe('memoryCommand', () => {
expect.any(Number),
);
});
it('should show project memory from the configured context file', async () => {
const projectCommand = showCommand.subCommands?.find(
(cmd) => cmd.name === '--project',
);
if (!projectCommand?.action) throw new Error('Command has no action');
setGeminiMdFilename('AGENTS.md');
vi.spyOn(process, 'cwd').mockReturnValue('/test/project');
mockReadFile.mockResolvedValue('project memory');
await projectCommand.action(mockContext, '');
const expectedProjectPath = path.join('/test/project', 'AGENTS.md');
expect(mockReadFile).toHaveBeenCalledWith(expectedProjectPath, 'utf-8');
expect(mockContext.ui.addItem).toHaveBeenCalledWith(
{
type: MessageType.INFO,
text: expect.stringContaining(expectedProjectPath),
},
expect.any(Number),
);
});
it('should show global memory from the configured context file', async () => {
const globalCommand = showCommand.subCommands?.find(
(cmd) => cmd.name === '--global',
);
if (!globalCommand?.action) throw new Error('Command has no action');
setGeminiMdFilename('AGENTS.md');
vi.spyOn(os, 'homedir').mockReturnValue('/home/user');
mockReadFile.mockResolvedValue('global memory');
await globalCommand.action(mockContext, '');
const expectedGlobalPath = path.join('/home/user', QWEN_DIR, 'AGENTS.md');
expect(mockReadFile).toHaveBeenCalledWith(expectedGlobalPath, 'utf-8');
expect(mockContext.ui.addItem).toHaveBeenCalledWith(
{
type: MessageType.INFO,
text: expect.stringContaining('Global memory content'),
},
expect.any(Number),
);
});
});
describe('/memory add', () => {
@@ -299,7 +233,9 @@ describe('memoryCommand', () => {
services: {
config: mockConfig,
settings: {
merged: {},
merged: {
memoryDiscoveryMaxDirs: 1000,
},
} as LoadedSettings,
},
});

View File

@@ -6,13 +6,12 @@
import {
getErrorMessage,
getCurrentGeminiMdFilename,
loadServerHierarchicalMemory,
QWEN_DIR,
} from '@qwen-code/qwen-code-core';
import path from 'node:path';
import os from 'node:os';
import fs from 'node:fs/promises';
import os from 'os';
import fs from 'fs/promises';
import { MessageType } from '../types.js';
import type { SlashCommand, SlashCommandActionReturn } from './types.js';
import { CommandKind } from './types.js';
@@ -57,12 +56,7 @@ export const memoryCommand: SlashCommand = {
kind: CommandKind.BUILT_IN,
action: async (context) => {
try {
const workingDir =
context.services.config?.getWorkingDir?.() ?? process.cwd();
const projectMemoryPath = path.join(
workingDir,
getCurrentGeminiMdFilename(),
);
const projectMemoryPath = path.join(process.cwd(), 'QWEN.md');
const memoryContent = await fs.readFile(
projectMemoryPath,
'utf-8',
@@ -110,7 +104,7 @@ export const memoryCommand: SlashCommand = {
const globalMemoryPath = path.join(
os.homedir(),
QWEN_DIR,
getCurrentGeminiMdFilename(),
'QWEN.md',
);
const globalMemoryContent = await fs.readFile(
globalMemoryPath,
@@ -315,6 +309,8 @@ export const memoryCommand: SlashCommand = {
config.getFolderTrust(),
context.services.settings.merged.context?.importFormat ||
'tree', // Use setting or default to 'tree'
config.getFileFilteringOptions(),
context.services.settings.merged.context?.discoveryMaxDirs,
);
config.setUserMemory(memoryContent);
config.setGeminiMdFileCount(fileCount);

View File

@@ -13,6 +13,12 @@ import {
type ContentGeneratorConfig,
type Config,
} from '@qwen-code/qwen-code-core';
import * as availableModelsModule from '../models/availableModels.js';
// Mock the availableModels module
vi.mock('../models/availableModels.js', () => ({
getAvailableModelsForAuthType: vi.fn(),
}));
// Helper function to create a mock config
function createMockConfig(
@@ -25,6 +31,9 @@ function createMockConfig(
describe('modelCommand', () => {
let mockContext: CommandContext;
const mockGetAvailableModelsForAuthType = vi.mocked(
availableModelsModule.getAvailableModelsForAuthType,
);
beforeEach(() => {
mockContext = createMockCommandContext();
@@ -78,6 +87,10 @@ describe('modelCommand', () => {
});
it('should return dialog action for QWEN_OAUTH auth type', async () => {
mockGetAvailableModelsForAuthType.mockReturnValue([
{ id: 'qwen3-coder-plus', label: 'qwen3-coder-plus' },
]);
const mockConfig = createMockConfig({
model: 'test-model',
authType: AuthType.QWEN_OAUTH,
@@ -92,7 +105,11 @@ describe('modelCommand', () => {
});
});
it('should return dialog action for USE_OPENAI auth type', async () => {
it('should return dialog action for USE_OPENAI auth type when model is available', async () => {
mockGetAvailableModelsForAuthType.mockReturnValue([
{ id: 'gpt-4', label: 'gpt-4' },
]);
const mockConfig = createMockConfig({
model: 'test-model',
authType: AuthType.USE_OPENAI,
@@ -107,7 +124,28 @@ describe('modelCommand', () => {
});
});
it('should return dialog action for unsupported auth types', async () => {
it('should return error for USE_OPENAI auth type when no model is available', async () => {
mockGetAvailableModelsForAuthType.mockReturnValue([]);
const mockConfig = createMockConfig({
model: 'test-model',
authType: AuthType.USE_OPENAI,
});
mockContext.services.config = mockConfig as Config;
const result = await modelCommand.action!(mockContext, '');
expect(result).toEqual({
type: 'message',
messageType: 'error',
content:
'No models available for the current authentication type (openai).',
});
});
it('should return error for unsupported auth types', async () => {
mockGetAvailableModelsForAuthType.mockReturnValue([]);
const mockConfig = createMockConfig({
model: 'test-model',
authType: 'UNSUPPORTED_AUTH_TYPE' as AuthType,
@@ -117,8 +155,10 @@ describe('modelCommand', () => {
const result = await modelCommand.action!(mockContext, '');
expect(result).toEqual({
type: 'dialog',
dialog: 'model',
type: 'message',
messageType: 'error',
content:
'No models available for the current authentication type (UNSUPPORTED_AUTH_TYPE).',
});
});

View File

@@ -11,6 +11,7 @@ import type {
MessageActionReturn,
} from './types.js';
import { CommandKind } from './types.js';
import { getAvailableModelsForAuthType } from '../models/availableModels.js';
import { t } from '../../i18n/index.js';
export const modelCommand: SlashCommand = {
@@ -29,7 +30,7 @@ export const modelCommand: SlashCommand = {
return {
type: 'message',
messageType: 'error',
content: t('Configuration not available.'),
content: 'Configuration not available.',
};
}
@@ -51,6 +52,22 @@ export const modelCommand: SlashCommand = {
};
}
const availableModels = getAvailableModelsForAuthType(authType);
if (availableModels.length === 0) {
return {
type: 'message',
messageType: 'error',
content: t(
'No models available for the current authentication type ({{authType}}).',
{
authType,
},
),
};
}
// Trigger model selection dialog
return {
type: 'dialog',
dialog: 'model',

View File

@@ -1,132 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import {
CommandKind,
type CommandCompletionItem,
type CommandContext,
type SlashCommand,
} from './types.js';
import { MessageType, type HistoryItemSkillsList } from '../types.js';
import { t } from '../../i18n/index.js';
import { AsyncFzf } from 'fzf';
import type { SkillConfig } from '@qwen-code/qwen-code-core';
export const skillsCommand: SlashCommand = {
name: 'skills',
get description() {
return t('List available skills.');
},
kind: CommandKind.BUILT_IN,
action: async (context: CommandContext, args?: string) => {
const rawArgs = args?.trim() ?? '';
const [skillName = ''] = rawArgs.split(/\s+/);
const skillManager = context.services.config?.getSkillManager();
if (!skillManager) {
context.ui.addItem(
{
type: MessageType.ERROR,
text: t('Could not retrieve skill manager.'),
},
Date.now(),
);
return;
}
const skills = await skillManager.listSkills();
if (skills.length === 0) {
context.ui.addItem(
{
type: MessageType.INFO,
text: t('No skills are currently available.'),
},
Date.now(),
);
return;
}
if (!skillName) {
const sortedSkills = [...skills].sort((left, right) =>
left.name.localeCompare(right.name),
);
const skillsListItem: HistoryItemSkillsList = {
type: MessageType.SKILLS_LIST,
skills: sortedSkills.map((skill) => ({ name: skill.name })),
};
context.ui.addItem(skillsListItem, Date.now());
return;
}
const normalizedName = skillName.toLowerCase();
const hasSkill = skills.some(
(skill) => skill.name.toLowerCase() === normalizedName,
);
if (!hasSkill) {
context.ui.addItem(
{
type: MessageType.ERROR,
text: t('Unknown skill: {{name}}', { name: skillName }),
},
Date.now(),
);
return;
}
const rawInput = context.invocation?.raw ?? `/skills ${rawArgs}`;
return {
type: 'submit_prompt',
content: [{ text: rawInput }],
};
},
completion: async (
context: CommandContext,
partialArg: string,
): Promise<CommandCompletionItem[]> => {
const skillManager = context.services.config?.getSkillManager();
if (!skillManager) {
return [];
}
const skills = await skillManager.listSkills();
const normalizedPartial = partialArg.trim();
const matches = await getSkillMatches(skills, normalizedPartial);
return matches.map((skill) => ({
value: skill.name,
description: skill.description,
}));
},
};
async function getSkillMatches(
skills: SkillConfig[],
query: string,
): Promise<SkillConfig[]> {
if (!query) {
return skills;
}
const names = skills.map((skill) => skill.name);
const skillMap = new Map(skills.map((skill) => [skill.name, skill]));
try {
const fzf = new AsyncFzf(names, {
fuzzy: 'v2',
casing: 'case-insensitive',
});
const results = (await fzf.find(query)) as Array<{ item: string }>;
return results
.map((result) => skillMap.get(result.item))
.filter((skill): skill is SkillConfig => !!skill);
} catch (error) {
console.error('[skillsCommand] Fuzzy match failed:', error);
const lowerQuery = query.toLowerCase();
return skills.filter((skill) =>
skill.name.toLowerCase().startsWith(lowerQuery),
);
}
}

View File

@@ -209,12 +209,6 @@ export enum CommandKind {
MCP_PROMPT = 'mcp-prompt',
}
export interface CommandCompletionItem {
value: string;
label?: string;
description?: string;
}
// The standardized contract for any command in the system.
export interface SlashCommand {
name: string;
@@ -240,7 +234,7 @@ export interface SlashCommand {
completion?: (
context: CommandContext,
partialArg: string,
) => Promise<Array<string | CommandCompletionItem> | null>;
) => Promise<string[]>;
subCommands?: SlashCommand[];
}

View File

@@ -54,7 +54,7 @@ export function ApprovalModeDialog({
}: ApprovalModeDialogProps): React.JSX.Element {
// Start with User scope by default
const [selectedScope, setSelectedScope] = useState<SettingScope>(
SettingScope.Workspace,
SettingScope.User,
);
// Track the currently highlighted approval mode

View File

@@ -26,7 +26,6 @@ import { useSettings } from '../contexts/SettingsContext.js';
import { ApprovalMode } from '@qwen-code/qwen-code-core';
import { StreamingState } from '../types.js';
import { ConfigInitDisplay } from '../components/ConfigInitDisplay.js';
import { FeedbackDialog } from '../FeedbackDialog.js';
import { t } from '../../i18n/index.js';
export const Composer = () => {
@@ -135,8 +134,6 @@ export const Composer = () => {
</OverflowProvider>
)}
{uiState.isFeedbackDialogOpen && <FeedbackDialog />}
{uiState.isInputActive && (
<InputPrompt
buffer={uiState.buffer}

View File

@@ -25,6 +25,7 @@ import { useUIState } from '../contexts/UIStateContext.js';
import { useUIActions } from '../contexts/UIActionsContext.js';
import { useConfig } from '../contexts/ConfigContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { SettingScope } from '../../config/settings.js';
import { AuthState } from '../types.js';
import { AuthType } from '@qwen-code/qwen-code-core';
import process from 'node:process';
@@ -201,7 +202,7 @@ export const DialogManager = ({
return (
<OpenAIKeyPrompt
onSubmit={(apiKey, baseUrl, model) => {
uiActions.handleAuthSelect(AuthType.USE_OPENAI, {
uiActions.handleAuthSelect(AuthType.USE_OPENAI, SettingScope.User, {
apiKey,
baseUrl,
model,

View File

@@ -30,7 +30,6 @@ import { Help } from './Help.js';
import type { SlashCommand } from '../commands/types.js';
import { ExtensionsList } from './views/ExtensionsList.js';
import { getMCPServerStatus } from '@qwen-code/qwen-code-core';
import { SkillsList } from './views/SkillsList.js';
import { ToolsList } from './views/ToolsList.js';
import { McpStatus } from './views/McpStatus.js';
@@ -154,9 +153,6 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
showDescriptions={itemForDisplay.showDescriptions}
/>
)}
{itemForDisplay.type === 'skills_list' && (
<SkillsList skills={itemForDisplay.skills} />
)}
{itemForDisplay.type === 'mcp_status' && (
<McpStatus {...itemForDisplay} serverStatus={getMCPServerStatus} />
)}

View File

@@ -33,9 +33,6 @@ vi.mock('../hooks/useCommandCompletion.js');
vi.mock('../hooks/useInputHistory.js');
vi.mock('../hooks/useReverseSearchCompletion.js');
vi.mock('../utils/clipboardUtils.js');
vi.mock('../contexts/UIStateContext.js', () => ({
useUIState: vi.fn(() => ({ isFeedbackDialogOpen: false })),
}));
const mockSlashCommands: SlashCommand[] = [
{
@@ -281,7 +278,7 @@ describe('InputPrompt', () => {
unmount();
});
it('should call completion.navigateUp for up arrow when suggestions are showing', async () => {
it('should call completion.navigateUp for both up arrow and Ctrl+P when suggestions are showing', async () => {
mockedUseCommandCompletion.mockReturnValue({
...mockCommandCompletion,
showSuggestions: true,
@@ -296,22 +293,19 @@ describe('InputPrompt', () => {
const { stdin, unmount } = renderWithProviders(<InputPrompt {...props} />);
await wait();
// Test up arrow for completion navigation
// Test up arrow
stdin.write('\u001B[A'); // Up arrow
await wait();
expect(mockCommandCompletion.navigateUp).toHaveBeenCalledTimes(1);
expect(mockCommandCompletion.navigateDown).not.toHaveBeenCalled();
// Ctrl+P should navigate history, not completion
stdin.write('\u0010'); // Ctrl+P
await wait();
expect(mockCommandCompletion.navigateUp).toHaveBeenCalledTimes(1);
expect(mockInputHistory.navigateUp).toHaveBeenCalled();
expect(mockCommandCompletion.navigateUp).toHaveBeenCalledTimes(2);
expect(mockCommandCompletion.navigateDown).not.toHaveBeenCalled();
unmount();
});
it('should call completion.navigateDown for down arrow when suggestions are showing', async () => {
it('should call completion.navigateDown for both down arrow and Ctrl+N when suggestions are showing', async () => {
mockedUseCommandCompletion.mockReturnValue({
...mockCommandCompletion,
showSuggestions: true,
@@ -325,17 +319,14 @@ describe('InputPrompt', () => {
const { stdin, unmount } = renderWithProviders(<InputPrompt {...props} />);
await wait();
// Test down arrow for completion navigation
// Test down arrow
stdin.write('\u001B[B'); // Down arrow
await wait();
expect(mockCommandCompletion.navigateDown).toHaveBeenCalledTimes(1);
expect(mockCommandCompletion.navigateUp).not.toHaveBeenCalled();
// Ctrl+N should navigate history, not completion
stdin.write('\u000E'); // Ctrl+N
await wait();
expect(mockCommandCompletion.navigateDown).toHaveBeenCalledTimes(1);
expect(mockInputHistory.navigateDown).toHaveBeenCalled();
expect(mockCommandCompletion.navigateDown).toHaveBeenCalledTimes(2);
expect(mockCommandCompletion.navigateUp).not.toHaveBeenCalled();
unmount();
});
@@ -773,8 +764,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -802,8 +791,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -831,8 +818,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -860,8 +845,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -889,8 +872,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -919,8 +900,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -948,8 +927,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -978,8 +955,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1008,8 +983,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1038,8 +1011,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1068,8 +1039,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1100,8 +1069,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1130,8 +1097,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();
@@ -1162,8 +1127,6 @@ describe('InputPrompt', () => {
mockCommandContext,
false,
expect.any(Object),
// active parameter: completion enabled when not just navigated history
true,
);
unmount();

View File

@@ -36,8 +36,6 @@ import {
import * as path from 'node:path';
import { SCREEN_READER_USER_PREFIX } from '../textConstants.js';
import { useShellFocusState } from '../contexts/ShellFocusContext.js';
import { useUIState } from '../contexts/UIStateContext.js';
import { FEEDBACK_DIALOG_KEYS } from '../FeedbackDialog.js';
export interface InputPromptProps {
buffer: TextBuffer;
onSubmit: (value: string) => void;
@@ -102,7 +100,6 @@ export const InputPrompt: React.FC<InputPromptProps> = ({
isEmbeddedShellFocused,
}) => {
const isShellFocused = useShellFocusState();
const uiState = useUIState();
const [justNavigatedHistory, setJustNavigatedHistory] = useState(false);
const [escPressCount, setEscPressCount] = useState(0);
const [showEscapePrompt, setShowEscapePrompt] = useState(false);
@@ -138,8 +135,6 @@ export const InputPrompt: React.FC<InputPromptProps> = ({
commandContext,
reverseSearchActive,
config,
// Suppress completion when history navigation just occurred
!justNavigatedHistory,
);
const reverseSearchCompletion = useReverseSearchCompletion(
@@ -224,9 +219,9 @@ export const InputPrompt: React.FC<InputPromptProps> = ({
const inputHistory = useInputHistory({
userMessages,
onSubmit: handleSubmitAndClear,
// History navigation (Ctrl+P/N) now always works since completion navigation
// only uses arrow keys. Only disable in shell mode.
isActive: !shellModeActive,
isActive:
(!completion.showSuggestions || completion.suggestions.length === 1) &&
!shellModeActive,
currentQuery: buffer.text,
onChange: customSetTextAndResetCompletionSignal,
});
@@ -331,14 +326,6 @@ export const InputPrompt: React.FC<InputPromptProps> = ({
return;
}
// Intercept feedback dialog option keys (1, 2) when dialog is open
if (
uiState.isFeedbackDialogOpen &&
(FEEDBACK_DIALOG_KEYS as readonly string[]).includes(key.name)
) {
return;
}
// Reset ESC count and hide prompt on any non-ESC key
if (key.name !== 'escape') {
if (escPressCount > 0 || showEscapePrompt) {
@@ -683,7 +670,6 @@ export const InputPrompt: React.FC<InputPromptProps> = ({
recentPasteTime,
commandSearchActive,
commandSearchCompletion,
uiState,
],
);

View File

@@ -10,11 +10,7 @@ import { ModelDialog } from './ModelDialog.js';
import { useKeypress } from '../hooks/useKeypress.js';
import { DescriptiveRadioButtonSelect } from './shared/DescriptiveRadioButtonSelect.js';
import { ConfigContext } from '../contexts/ConfigContext.js';
import { SettingsContext } from '../contexts/SettingsContext.js';
import type { Config } from '@qwen-code/qwen-code-core';
import { AuthType } from '@qwen-code/qwen-code-core';
import type { LoadedSettings } from '../../config/settings.js';
import { SettingScope } from '../../config/settings.js';
import {
AVAILABLE_MODELS_QWEN,
MAINLINE_CODER,
@@ -40,29 +36,18 @@ const renderComponent = (
};
const combinedProps = { ...defaultProps, ...props };
const mockSettings = {
isTrusted: true,
user: { settings: {} },
workspace: { settings: {} },
setValue: vi.fn(),
} as unknown as LoadedSettings;
const mockConfig = contextValue
? ({
// --- Functions used by ModelDialog ---
getModel: vi.fn(() => MAINLINE_CODER),
setModel: vi.fn().mockResolvedValue(undefined),
switchModel: vi.fn().mockResolvedValue(undefined),
setModel: vi.fn(),
getAuthType: vi.fn(() => 'qwen-oauth'),
// --- Functions used by ClearcutLogger ---
getUsageStatisticsEnabled: vi.fn(() => true),
getSessionId: vi.fn(() => 'mock-session-id'),
getDebugMode: vi.fn(() => false),
getContentGeneratorConfig: vi.fn(() => ({
authType: AuthType.QWEN_OAUTH,
model: MAINLINE_CODER,
})),
getContentGeneratorConfig: vi.fn(() => ({ authType: 'mock' })),
getUseSmartEdit: vi.fn(() => false),
getUseModelRouter: vi.fn(() => false),
getProxy: vi.fn(() => undefined),
@@ -73,27 +58,21 @@ const renderComponent = (
: undefined;
const renderResult = render(
<SettingsContext.Provider value={mockSettings}>
<ConfigContext.Provider value={mockConfig}>
<ModelDialog {...combinedProps} />
</ConfigContext.Provider>
</SettingsContext.Provider>,
<ConfigContext.Provider value={mockConfig}>
<ModelDialog {...combinedProps} />
</ConfigContext.Provider>,
);
return {
...renderResult,
props: combinedProps,
mockConfig,
mockSettings,
};
};
describe('<ModelDialog />', () => {
beforeEach(() => {
vi.clearAllMocks();
// Ensure env-based fallback models don't leak into this suite from the developer environment.
delete process.env['OPENAI_MODEL'];
delete process.env['ANTHROPIC_MODEL'];
});
afterEach(() => {
@@ -112,12 +91,8 @@ describe('<ModelDialog />', () => {
const props = mockedSelect.mock.calls[0][0];
expect(props.items).toHaveLength(AVAILABLE_MODELS_QWEN.length);
expect(props.items[0].value).toBe(
`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`,
);
expect(props.items[1].value).toBe(
`${AuthType.QWEN_OAUTH}::${MAINLINE_VLM}`,
);
expect(props.items[0].value).toBe(MAINLINE_CODER);
expect(props.items[1].value).toBe(MAINLINE_VLM);
expect(props.showNumbers).toBe(true);
});
@@ -164,93 +139,16 @@ describe('<ModelDialog />', () => {
expect(mockedSelect).toHaveBeenCalledTimes(1);
});
it('calls config.switchModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', async () => {
const { props, mockConfig, mockSettings } = renderComponent({}, {}); // Pass empty object for contextValue
it('calls config.setModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', () => {
const { props, mockConfig } = renderComponent({}, {}); // Pass empty object for contextValue
const childOnSelect = mockedSelect.mock.calls[0][0].onSelect;
expect(childOnSelect).toBeDefined();
await childOnSelect(`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`);
childOnSelect(MAINLINE_CODER);
expect(mockConfig?.switchModel).toHaveBeenCalledWith(
AuthType.QWEN_OAUTH,
MAINLINE_CODER,
undefined,
{
reason: 'user_manual',
context: 'Model switched via /model dialog',
},
);
expect(mockSettings.setValue).toHaveBeenCalledWith(
SettingScope.User,
'model.name',
MAINLINE_CODER,
);
expect(mockSettings.setValue).toHaveBeenCalledWith(
SettingScope.User,
'security.auth.selectedType',
AuthType.QWEN_OAUTH,
);
expect(props.onClose).toHaveBeenCalledTimes(1);
});
it('calls config.switchModel and persists authType+model when selecting a different authType', async () => {
const switchModel = vi.fn().mockResolvedValue(undefined);
const getAuthType = vi.fn(() => AuthType.USE_OPENAI);
const getAvailableModelsForAuthType = vi.fn((t: AuthType) => {
if (t === AuthType.USE_OPENAI) {
return [{ id: 'gpt-4', label: 'GPT-4', authType: t }];
}
if (t === AuthType.QWEN_OAUTH) {
return AVAILABLE_MODELS_QWEN.map((m) => ({
id: m.id,
label: m.label,
authType: AuthType.QWEN_OAUTH,
}));
}
return [];
});
const mockConfigWithSwitchAuthType = {
getAuthType,
getModel: vi.fn(() => 'gpt-4'),
getContentGeneratorConfig: vi.fn(() => ({
authType: AuthType.QWEN_OAUTH,
model: MAINLINE_CODER,
})),
// Add switchModel to the mock object (not the type)
switchModel,
getAvailableModelsForAuthType,
};
const { props, mockSettings } = renderComponent(
{},
// Cast to Config to bypass type checking, matching the runtime behavior
mockConfigWithSwitchAuthType as unknown as Partial<Config>,
);
const childOnSelect = mockedSelect.mock.calls[0][0].onSelect;
await childOnSelect(`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`);
expect(switchModel).toHaveBeenCalledWith(
AuthType.QWEN_OAUTH,
MAINLINE_CODER,
{ requireCachedCredentials: true },
{
reason: 'user_manual',
context: 'AuthType+model switched via /model dialog',
},
);
expect(mockSettings.setValue).toHaveBeenCalledWith(
SettingScope.User,
'model.name',
MAINLINE_CODER,
);
expect(mockSettings.setValue).toHaveBeenCalledWith(
SettingScope.User,
'security.auth.selectedType',
AuthType.QWEN_OAUTH,
);
// Assert against the default mock provided by renderComponent
expect(mockConfig?.setModel).toHaveBeenCalledWith(MAINLINE_CODER);
expect(props.onClose).toHaveBeenCalledTimes(1);
});
@@ -295,25 +193,17 @@ describe('<ModelDialog />', () => {
it('updates initialIndex when config context changes', () => {
const mockGetModel = vi.fn(() => MAINLINE_CODER);
const mockGetAuthType = vi.fn(() => 'qwen-oauth');
const mockSettings = {
isTrusted: true,
user: { settings: {} },
workspace: { settings: {} },
setValue: vi.fn(),
} as unknown as LoadedSettings;
const { rerender } = render(
<SettingsContext.Provider value={mockSettings}>
<ConfigContext.Provider
value={
{
getModel: mockGetModel,
getAuthType: mockGetAuthType,
} as unknown as Config
}
>
<ModelDialog onClose={vi.fn()} />
</ConfigContext.Provider>
</SettingsContext.Provider>,
<ConfigContext.Provider
value={
{
getModel: mockGetModel,
getAuthType: mockGetAuthType,
} as unknown as Config
}
>
<ModelDialog onClose={vi.fn()} />
</ConfigContext.Provider>,
);
expect(mockedSelect.mock.calls[0][0].initialIndex).toBe(0);
@@ -325,11 +215,9 @@ describe('<ModelDialog />', () => {
} as unknown as Config;
rerender(
<SettingsContext.Provider value={mockSettings}>
<ConfigContext.Provider value={newMockConfig}>
<ModelDialog onClose={vi.fn()} />
</ConfigContext.Provider>
</SettingsContext.Provider>,
<ConfigContext.Provider value={newMockConfig}>
<ModelDialog onClose={vi.fn()} />
</ConfigContext.Provider>,
);
// Should be called at least twice: initial render + re-render after context change
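Editor's note: in both the updated tests above and the reworked ModelDialog (next file), each selectable option is encoded as `${authType}::${modelId}` and split back apart in `handleSelect`. A minimal sketch of that round-trip, outside the diff, with hypothetical helper names and a placeholder model id:

```ts
// Hypothetical helpers illustrating the `authType::modelId` option values used by
// the dialog; only the '::' separator and the fallback behaviour come from the diff.
const SEP = '::';

function encodeSelection(authType: string, modelId: string): string {
  return `${authType}${SEP}${modelId}`;
}

function parseSelection(
  value: string,
  fallbackAuthType: string,
): { authType: string; modelId: string } {
  const idx = value.indexOf(SEP);
  if (idx < 0) {
    // A bare model id keeps the current auth type, mirroring handleSelect.
    return { authType: fallbackAuthType, modelId: value };
  }
  return {
    authType: value.slice(0, idx),
    modelId: value.slice(idx + SEP.length),
  };
}

// encodeSelection('qwen-oauth', 'some-coder-model') === 'qwen-oauth::some-coder-model'
// parseSelection('qwen-oauth::some-coder-model', 'openai')
//   -> { authType: 'qwen-oauth', modelId: 'some-coder-model' }
// parseSelection('some-coder-model', 'openai')
//   -> { authType: 'openai', modelId: 'some-coder-model' }
```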

View File

@@ -5,210 +5,52 @@
*/
import type React from 'react';
import { useCallback, useContext, useMemo, useState } from 'react';
import { useCallback, useContext, useMemo } from 'react';
import { Box, Text } from 'ink';
import {
AuthType,
ModelSlashCommandEvent,
logModelSlashCommand,
type ContentGeneratorConfig,
type ContentGeneratorConfigSource,
type ContentGeneratorConfigSources,
} from '@qwen-code/qwen-code-core';
import { useKeypress } from '../hooks/useKeypress.js';
import { theme } from '../semantic-colors.js';
import { DescriptiveRadioButtonSelect } from './shared/DescriptiveRadioButtonSelect.js';
import { ConfigContext } from '../contexts/ConfigContext.js';
import { UIStateContext } from '../contexts/UIStateContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import {
getAvailableModelsForAuthType,
MAINLINE_CODER,
} from '../models/availableModels.js';
import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js';
import { t } from '../../i18n/index.js';
interface ModelDialogProps {
onClose: () => void;
}
function formatSourceBadge(
source: ContentGeneratorConfigSource | undefined,
): string | undefined {
if (!source) return undefined;
switch (source.kind) {
case 'cli':
return source.detail ? `CLI ${source.detail}` : 'CLI';
case 'env':
return source.envKey ? `ENV ${source.envKey}` : 'ENV';
case 'settings':
return source.settingsPath
? `Settings ${source.settingsPath}`
: 'Settings';
case 'modelProviders': {
const suffix =
source.authType && source.modelId
? `${source.authType}:${source.modelId}`
: source.authType
? `${source.authType}`
: source.modelId
? `${source.modelId}`
: '';
return suffix ? `ModelProviders ${suffix}` : 'ModelProviders';
}
case 'default':
return source.detail ? `Default ${source.detail}` : 'Default';
case 'computed':
return source.detail ? `Computed ${source.detail}` : 'Computed';
case 'programmatic':
return source.detail ? `Programmatic ${source.detail}` : 'Programmatic';
case 'unknown':
default:
return undefined;
}
}
function readSourcesFromConfig(config: unknown): ContentGeneratorConfigSources {
if (!config) {
return {};
}
const maybe = config as {
getContentGeneratorConfigSources?: () => ContentGeneratorConfigSources;
};
return maybe.getContentGeneratorConfigSources?.() ?? {};
}
function maskApiKey(apiKey: string | undefined): string {
if (!apiKey) return '(not set)';
const trimmed = apiKey.trim();
if (trimmed.length === 0) return '(not set)';
if (trimmed.length <= 6) return '***';
const head = trimmed.slice(0, 3);
const tail = trimmed.slice(-4);
return `${head}${tail}`;
}
function persistModelSelection(
settings: ReturnType<typeof useSettings>,
modelId: string,
): void {
const scope = getPersistScopeForModelSelection(settings);
settings.setValue(scope, 'model.name', modelId);
}
function persistAuthTypeSelection(
settings: ReturnType<typeof useSettings>,
authType: AuthType,
): void {
const scope = getPersistScopeForModelSelection(settings);
settings.setValue(scope, 'security.auth.selectedType', authType);
}
function ConfigRow({
label,
value,
badge,
}: {
label: string;
value: React.ReactNode;
badge?: string;
}): React.JSX.Element {
return (
<Box flexDirection="column">
<Box>
<Box minWidth={12} flexShrink={0}>
<Text color={theme.text.secondary}>{label}:</Text>
</Box>
<Box flexGrow={1} flexDirection="row" flexWrap="wrap">
<Text>{value}</Text>
</Box>
</Box>
{badge ? (
<Box>
<Box minWidth={12} flexShrink={0}>
<Text> </Text>
</Box>
<Box flexGrow={1}>
<Text color={theme.text.secondary}>{badge}</Text>
</Box>
</Box>
) : null}
</Box>
);
}
export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
const config = useContext(ConfigContext);
const uiState = useContext(UIStateContext);
const settings = useSettings();
// Local error state for displaying errors within the dialog
const [errorMessage, setErrorMessage] = useState<string | null>(null);
// Get auth type from config, default to QWEN_OAUTH if not available
const authType = config?.getAuthType() ?? AuthType.QWEN_OAUTH;
const authType = config?.getAuthType();
const effectiveConfig =
(config?.getContentGeneratorConfig?.() as
| ContentGeneratorConfig
| undefined) ?? undefined;
const sources = readSourcesFromConfig(config);
const availableModelEntries = useMemo(() => {
const allAuthTypes = Object.values(AuthType) as AuthType[];
const modelsByAuthType = allAuthTypes
.map((t) => ({
authType: t,
models: getAvailableModelsForAuthType(t, config ?? undefined),
}))
.filter((x) => x.models.length > 0);
// Fixed order: qwen-oauth first, then others in a stable order
const authTypeOrder: AuthType[] = [
AuthType.QWEN_OAUTH,
AuthType.USE_OPENAI,
AuthType.USE_ANTHROPIC,
AuthType.USE_GEMINI,
AuthType.USE_VERTEX_AI,
];
// Filter to only include authTypes that have models
const availableAuthTypes = new Set(modelsByAuthType.map((x) => x.authType));
const orderedAuthTypes = authTypeOrder.filter((t) =>
availableAuthTypes.has(t),
);
return orderedAuthTypes.flatMap((t) => {
const models =
modelsByAuthType.find((x) => x.authType === t)?.models ?? [];
return models.map((m) => ({ authType: t, model: m }));
});
}, [config]);
// Get available models based on auth type
const availableModels = useMemo(
() => getAvailableModelsForAuthType(authType),
[authType],
);
const MODEL_OPTIONS = useMemo(
() =>
availableModelEntries.map(({ authType: t2, model }) => {
const value = `${t2}::${model.id}`;
const title = (
<Text>
<Text bold color={theme.text.accent}>
[{t2}]
</Text>
<Text>{` ${model.label}`}</Text>
</Text>
);
const description = model.description || '';
return {
value,
title,
description,
key: value,
};
}),
[availableModelEntries],
availableModels.map((model) => ({
value: model.id,
title: model.label,
description: model.description || '',
key: model.id,
})),
[availableModels],
);
const preferredModelId = config?.getModel() || MAINLINE_CODER;
const preferredKey = authType ? `${authType}::${preferredModelId}` : '';
// Determine the Preferred Model (read once when the dialog opens).
const preferredModel = config?.getModel() || MAINLINE_CODER;
useKeypress(
(key) => {
@@ -219,83 +61,25 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
{ isActive: true },
);
const initialIndex = useMemo(() => {
const index = MODEL_OPTIONS.findIndex(
(option) => option.value === preferredKey,
);
return index === -1 ? 0 : index;
}, [MODEL_OPTIONS, preferredKey]);
// Calculate the initial index based on the preferred model.
const initialIndex = useMemo(
() => MODEL_OPTIONS.findIndex((option) => option.value === preferredModel),
[MODEL_OPTIONS, preferredModel],
);
// Handle selection internally (Autonomous Dialog).
const handleSelect = useCallback(
async (selected: string) => {
// Clear any previous error
setErrorMessage(null);
const sep = '::';
const idx = selected.indexOf(sep);
const selectedAuthType = (
idx >= 0 ? selected.slice(0, idx) : authType
) as AuthType;
const modelId = idx >= 0 ? selected.slice(idx + sep.length) : selected;
(model: string) => {
if (config) {
try {
await config.switchModel(
selectedAuthType,
modelId,
selectedAuthType !== authType &&
selectedAuthType === AuthType.QWEN_OAUTH
? { requireCachedCredentials: true }
: undefined,
{
reason: 'user_manual',
context:
selectedAuthType === authType
? 'Model switched via /model dialog'
: 'AuthType+model switched via /model dialog',
},
);
} catch (e) {
const baseErrorMessage = e instanceof Error ? e.message : String(e);
setErrorMessage(
`Failed to switch model to '${modelId}'.\n\n${baseErrorMessage}`,
);
return;
}
const event = new ModelSlashCommandEvent(modelId);
config.setModel(model);
const event = new ModelSlashCommandEvent(model);
logModelSlashCommand(config, event);
const after = config.getContentGeneratorConfig?.() as
| ContentGeneratorConfig
| undefined;
const effectiveAuthType =
after?.authType ?? selectedAuthType ?? authType;
const effectiveModelId = after?.model ?? modelId;
persistModelSelection(settings, effectiveModelId);
persistAuthTypeSelection(settings, effectiveAuthType);
const baseUrl = after?.baseUrl ?? t('(default)');
const maskedKey = maskApiKey(after?.apiKey);
uiState?.historyManager.addItem(
{
type: 'info',
text:
`authType: ${effectiveAuthType}\n` +
`Using model: ${effectiveModelId}\n` +
`Base URL: ${baseUrl}\n` +
`API key: ${maskedKey}`,
},
Date.now(),
);
}
onClose();
},
[authType, config, onClose, settings, uiState, setErrorMessage],
[config, onClose],
);
const hasModels = MODEL_OPTIONS.length > 0;
return (
<Box
borderStyle="round"
@@ -305,73 +89,14 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
width="100%"
>
<Text bold>{t('Select Model')}</Text>
<Box marginTop={1} flexDirection="column">
<Text color={theme.text.secondary}>
{t('Current (effective) configuration')}
</Text>
<Box flexDirection="column" marginTop={1}>
<ConfigRow label="AuthType" value={authType} />
<ConfigRow
label="Model"
value={effectiveConfig?.model ?? config?.getModel?.() ?? ''}
badge={formatSourceBadge(sources['model'])}
/>
{authType !== AuthType.QWEN_OAUTH && (
<>
<ConfigRow
label="Base URL"
value={effectiveConfig?.baseUrl ?? t('(default)')}
badge={formatSourceBadge(sources['baseUrl'])}
/>
<ConfigRow
label="API Key"
value={effectiveConfig?.apiKey ? t('(set)') : t('(not set)')}
badge={formatSourceBadge(sources['apiKey'])}
/>
</>
)}
</Box>
<Box marginTop={1}>
<DescriptiveRadioButtonSelect
items={MODEL_OPTIONS}
onSelect={handleSelect}
initialIndex={initialIndex}
showNumbers={true}
/>
</Box>
{!hasModels ? (
<Box marginTop={1} flexDirection="column">
<Text color={theme.status.warning}>
{t(
'No models available for the current authentication type ({{authType}}).',
{
authType: authType ? String(authType) : t('(none)'),
},
)}
</Text>
<Box marginTop={1}>
<Text color={theme.text.secondary}>
{t(
'Please configure models in settings.modelProviders or use environment variables.',
)}
</Text>
</Box>
</Box>
) : (
<Box marginTop={1}>
<DescriptiveRadioButtonSelect
items={MODEL_OPTIONS}
onSelect={handleSelect}
initialIndex={initialIndex}
showNumbers={true}
/>
</Box>
)}
{errorMessage && (
<Box marginTop={1} flexDirection="column" paddingX={1}>
<Text color={theme.status.error} wrap="wrap">
{errorMessage}
</Text>
</Box>
)}
<Box marginTop={1} flexDirection="column">
<Text color={theme.text.secondary}>{t('(Press Esc to close)')}</Text>
</Box>

View File

@@ -1331,7 +1331,9 @@ describe('SettingsDialog', () => {
truncateToolOutputThreshold: 50000,
truncateToolOutputLines: 1000,
},
context: {},
context: {
discoveryMaxDirs: 500,
},
model: {
maxSessionTurns: 100,
skipNextSpeakerCheck: false,
@@ -1464,6 +1466,7 @@ describe('SettingsDialog', () => {
disableFuzzySearch: true,
},
loadMemoryFromIncludeDirectories: true,
discoveryMaxDirs: 100,
},
});
const onSelect = vi.fn();

View File

@@ -106,7 +106,7 @@ export function SuggestionsDisplay({
</Box>
{suggestion.description && (
<Box flexGrow={1} paddingLeft={2}>
<Box flexGrow={1} paddingLeft={3}>
<Text color={textColor} wrap="truncate">
{suggestion.description}
</Text>

View File

@@ -23,7 +23,7 @@ export const InfoMessage: React.FC<InfoMessageProps> = ({ text }) => {
const prefixWidth = prefix.length;
return (
<Box flexDirection="row" marginBottom={1}>
<Box flexDirection="row" marginTop={1}>
<Box width={prefixWidth}>
<Text color={theme.status.warning}>{prefix}</Text>
</Box>

View File

@@ -18,7 +18,7 @@ export const WarningMessage: React.FC<WarningMessageProps> = ({ text }) => {
const prefixWidth = 3;
return (
<Box flexDirection="row" marginBottom={1}>
<Box flexDirection="row" marginTop={1}>
<Box width={prefixWidth}>
<Text color={Colors.AccentYellow}>{prefix}</Text>
</Box>

View File

@@ -11,7 +11,7 @@ import { BaseSelectionList } from './BaseSelectionList.js';
import type { SelectionListItem } from '../../hooks/useSelectionList.js';
export interface DescriptiveRadioSelectItem<T> extends SelectionListItem<T> {
title: React.ReactNode;
title: string;
description: string;
}

View File

@@ -1,36 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import type React from 'react';
import { Box, Text } from 'ink';
import { theme } from '../../semantic-colors.js';
import { type SkillDefinition } from '../../types.js';
import { t } from '../../../i18n/index.js';
interface SkillsListProps {
skills: readonly SkillDefinition[];
}
export const SkillsList: React.FC<SkillsListProps> = ({ skills }) => (
<Box flexDirection="column" marginBottom={1}>
<Text bold color={theme.text.primary}>
{t('Available skills:')}
</Text>
<Box height={1} />
{skills.length > 0 ? (
skills.map((skill) => (
<Box key={skill.name} flexDirection="row">
<Text color={theme.text.primary}>{' '}- </Text>
<Text bold color={theme.text.accent}>
{skill.name}
</Text>
</Box>
))
) : (
<Text color={theme.text.primary}> {t('No skills available')}</Text>
)}
</Box>
);

View File

@@ -30,6 +30,7 @@ export interface UIActions {
) => void;
handleAuthSelect: (
authType: AuthType | undefined,
scope: SettingScope,
credentials?: OpenAICredentials,
) => Promise<void>;
setAuthState: (state: AuthState) => void;
@@ -66,10 +67,6 @@ export interface UIActions {
openResumeDialog: () => void;
closeResumeDialog: () => void;
handleResume: (sessionId: string) => void;
// Feedback dialog
openFeedbackDialog: () => void;
closeFeedbackDialog: () => void;
submitFeedback: (rating: number) => void;
}
export const UIActionsContext = createContext<UIActions | null>(null);

View File

@@ -126,8 +126,6 @@ export interface UIState {
// Subagent dialogs
isSubagentCreateDialogOpen: boolean;
isAgentsManagerDialogOpen: boolean;
// Feedback dialog
isFeedbackDialogOpen: boolean;
}
export const UIStateContext = createContext<UIState | null>(null);

View File

@@ -45,8 +45,6 @@ export function useCommandCompletion(
commandContext: CommandContext,
reverseSearchActive: boolean = false,
config?: Config,
// When false, suppresses showing suggestions (e.g., after history navigation)
active: boolean = true,
): UseCommandCompletionReturn {
const {
suggestions,
@@ -154,11 +152,7 @@ export function useCommandCompletion(
}, [suggestions, setActiveSuggestionIndex, setVisibleStartIndex]);
useEffect(() => {
if (
completionMode === CompletionMode.IDLE ||
reverseSearchActive ||
!active
) {
if (completionMode === CompletionMode.IDLE || reverseSearchActive) {
resetCompletionState();
return;
}
@@ -169,7 +163,6 @@ export function useCommandCompletion(
suggestions.length,
isLoadingSuggestions,
reverseSearchActive,
active,
resetCompletionState,
setShowSuggestions,
]);

View File

@@ -25,6 +25,7 @@ export interface DialogCloseOptions {
isAuthDialogOpen: boolean;
handleAuthSelect: (
authType: AuthType | undefined,
scope: SettingScope,
credentials?: OpenAICredentials,
) => Promise<void>;
pendingAuthType: AuthType | undefined;

View File

@@ -1,178 +0,0 @@
import { useState, useCallback, useEffect } from 'react';
import * as fs from 'node:fs';
import {
type Config,
logUserFeedback,
UserFeedbackEvent,
type UserFeedbackRating,
isNodeError,
AuthType,
} from '@qwen-code/qwen-code-core';
import { StreamingState, MessageType, type HistoryItem } from '../types.js';
import {
SettingScope,
type LoadedSettings,
USER_SETTINGS_PATH,
} from '../../config/settings.js';
import type { SessionStatsState } from '../contexts/SessionContext.js';
import stripJsonComments from 'strip-json-comments';
const FEEDBACK_SHOW_PROBABILITY = 0.25; // 25% probability of showing feedback dialog
const MIN_TOOL_CALLS = 10; // Minimum tool calls to show feedback dialog
const MIN_USER_MESSAGES = 5; // Minimum user messages to show feedback dialog
// Fatigue mechanism constants
const FEEDBACK_COOLDOWN_HOURS = 24; // Hours to wait before showing feedback dialog again
/**
* Check if the last message in the conversation history is an AI response
*/
const lastMessageIsAIResponse = (history: HistoryItem[]): boolean =>
history.length > 0 && history[history.length - 1].type === MessageType.GEMINI;
/**
* Read feedbackLastShownTimestamp directly from the user settings file
*/
const getFeedbackLastShownTimestampFromFile = (): number => {
try {
if (fs.existsSync(USER_SETTINGS_PATH)) {
const content = fs.readFileSync(USER_SETTINGS_PATH, 'utf-8');
const settings = JSON.parse(stripJsonComments(content));
return settings?.ui?.feedbackLastShownTimestamp ?? 0;
}
} catch (error) {
if (isNodeError(error) && error.code !== 'ENOENT') {
console.warn(
'Failed to read feedbackLastShownTimestamp from settings file:',
error,
);
}
}
return 0;
};
/**
* Check if we should show the feedback dialog based on fatigue mechanism
*/
const shouldShowFeedbackBasedOnFatigue = (): boolean => {
const feedbackLastShownTimestamp = getFeedbackLastShownTimestampFromFile();
const now = Date.now();
const timeSinceLastShown = now - feedbackLastShownTimestamp;
const cooldownMs = FEEDBACK_COOLDOWN_HOURS * 60 * 60 * 1000;
return timeSinceLastShown >= cooldownMs;
};
/**
* Check if the session meets the minimum requirements for showing feedback
* Either tool calls > 10 OR user messages > 5
*/
const meetsMinimumSessionRequirements = (
sessionStats: SessionStatsState,
): boolean => {
const toolCallsCount = sessionStats.metrics.tools.totalCalls;
const userMessagesCount = sessionStats.promptCount;
return (
toolCallsCount > MIN_TOOL_CALLS || userMessagesCount > MIN_USER_MESSAGES
);
};
export interface UseFeedbackDialogProps {
config: Config;
settings: LoadedSettings;
streamingState: StreamingState;
history: HistoryItem[];
sessionStats: SessionStatsState;
}
export const useFeedbackDialog = ({
config,
settings,
streamingState,
history,
sessionStats,
}: UseFeedbackDialogProps) => {
// Feedback dialog state
const [isFeedbackDialogOpen, setIsFeedbackDialogOpen] = useState(false);
const openFeedbackDialog = useCallback(() => {
setIsFeedbackDialogOpen(true);
// Record the timestamp when feedback dialog is shown (fire and forget)
settings.setValue(
SettingScope.User,
'ui.feedbackLastShownTimestamp',
Date.now(),
);
}, [settings]);
const closeFeedbackDialog = useCallback(
() => setIsFeedbackDialogOpen(false),
[],
);
const submitFeedback = useCallback(
(rating: number) => {
// Create and log the feedback event
const feedbackEvent = new UserFeedbackEvent(
sessionStats.sessionId,
rating as UserFeedbackRating,
config.getModel(),
config.getApprovalMode(),
);
logUserFeedback(config, feedbackEvent);
closeFeedbackDialog();
},
[config, sessionStats, closeFeedbackDialog],
);
useEffect(() => {
const checkAndShowFeedback = () => {
if (streamingState === StreamingState.Idle && history.length > 0) {
// Show feedback dialog if:
// 1. User is authenticated via QWEN_OAUTH
// 2. Qwen logger is enabled (required for feedback submission)
// 3. User feedback is enabled in settings
// 4. The last message is an AI response
// 5. Random chance (25% probability)
// 6. Meets minimum requirements (tool calls > 10 OR user messages > 5)
// 7. Fatigue mechanism allows showing (not shown recently across sessions)
if (
config.getAuthType() !== AuthType.QWEN_OAUTH ||
!config.getUsageStatisticsEnabled() ||
settings.merged.ui?.enableUserFeedback === false ||
!lastMessageIsAIResponse(history) ||
Math.random() > FEEDBACK_SHOW_PROBABILITY ||
!meetsMinimumSessionRequirements(sessionStats)
) {
return;
}
// Check fatigue mechanism (synchronous)
if (shouldShowFeedbackBasedOnFatigue()) {
openFeedbackDialog();
}
}
};
checkAndShowFeedback();
}, [
streamingState,
history,
sessionStats,
isFeedbackDialogOpen,
openFeedbackDialog,
settings.merged.ui?.enableUserFeedback,
config,
]);
return {
isFeedbackDialogOpen,
openFeedbackDialog,
closeFeedbackDialog,
submitFeedback,
};
};
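Editor's note: the fatigue mechanism in the useFeedbackDialog hook deleted above boils down to a timestamp comparison against a 24-hour cooldown. A minimal sketch of that check, assuming only the constants shown in the file; the function name is hypothetical:

```ts
// 24 hours expressed in milliseconds, as in the deleted hook.
const FEEDBACK_COOLDOWN_HOURS = 24;
const COOLDOWN_MS = FEEDBACK_COOLDOWN_HOURS * 60 * 60 * 1000; // 86_400_000 ms

function cooldownElapsed(lastShownTimestamp: number, now: number = Date.now()): boolean {
  // A missing timestamp is read as 0, so a user who has never seen the dialog
  // always passes the check.
  return now - lastShownTimestamp >= COOLDOWN_MS;
}

// cooldownElapsed(0) === true
// cooldownElapsed(Date.now() - 60 * 60 * 1000) === false  // shown one hour ago
```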

View File

@@ -912,7 +912,7 @@ export const useGeminiStream = (
// Reset quota error flag when starting a new query (not a continuation)
if (!options?.isContinuation) {
setModelSwitchedFromQuotaError(false);
// No quota-error / fallback routing mechanism currently; keep state minimal.
config.setQuotaErrorOccurred(false);
}
abortControllerRef.current = new AbortController();

View File

@@ -1,58 +1,21 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/
import { useCallback } from 'react';
import { useStdin } from 'ink';
import type { EditorType } from '@qwen-code/qwen-code-core';
import {
editorCommands,
commandExists as coreCommandExists,
} from '@qwen-code/qwen-code-core';
import { spawnSync } from 'child_process';
import { useSettings } from '../contexts/SettingsContext.js';
/**
* Cache for command existence checks to avoid repeated execSync calls.
*/
const commandExistsCache = new Map<string, boolean>();
/**
* Check if a command exists in the system with caching.
* Results are cached to improve performance in test environments.
*/
function commandExists(cmd: string): boolean {
if (commandExistsCache.has(cmd)) {
return commandExistsCache.get(cmd)!;
}
const exists = coreCommandExists(cmd);
commandExistsCache.set(cmd, exists);
return exists;
}
/**
* Get the actual executable command for an editor type.
*/
function getExecutableCommand(editorType: EditorType): string {
const commandConfig = editorCommands[editorType];
const commands =
process.platform === 'win32' ? commandConfig.win32 : commandConfig.default;
const availableCommand = commands.find((cmd) => commandExists(cmd));
if (!availableCommand) {
throw new Error(
`No available editor command found for ${editorType}. ` +
`Tried: ${commands.join(', ')}. ` +
`Please install one of these editors or set a different preferredEditor in settings.`,
);
}
return availableCommand;
}
/**
* Determines the editor command to use based on user preferences and platform.
*/
function getEditorCommand(preferredEditor?: EditorType): string {
if (preferredEditor) {
return getExecutableCommand(preferredEditor);
return preferredEditor;
}
// Platform-specific defaults with UI preference for macOS
@@ -100,14 +63,8 @@ export function useLaunchEditor() {
try {
setRawMode?.(false);
// On Windows, .cmd and .bat files need shell: true
const needsShell =
process.platform === 'win32' &&
(editorCommand.endsWith('.cmd') || editorCommand.endsWith('.bat'));
const { status, error } = spawnSync(editorCommand, editorArgs, {
stdio: 'inherit',
shell: needsShell,
});
if (error) throw error;
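Editor's note: the hunk above decides whether spawnSync needs shell: true based on the editor command's extension on Windows. A standalone sketch of that predicate (the helper name is hypothetical; the rule itself is the one in the diff):

```ts
// .cmd/.bat launchers (e.g. code.cmd) must go through a shell on Windows;
// plain executables can be spawned directly.
function needsShell(
  editorCommand: string,
  platform: NodeJS.Platform = process.platform,
): boolean {
  return (
    platform === 'win32' &&
    (editorCommand.endsWith('.cmd') || editorCommand.endsWith('.bat'))
  );
}

// needsShell('code.cmd', 'win32') === true
// needsShell('vim', 'win32') === false
// needsShell('code.cmd', 'linux') === false
```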

View File

@@ -573,45 +573,6 @@ describe('useSlashCompletion', () => {
});
});
it('should map completion items with descriptions for argument suggestions', async () => {
const mockCompletionFn = vi.fn().mockResolvedValue([
{ value: 'pdf', description: 'Create PDF documents' },
{ value: 'xlsx', description: 'Work with spreadsheets' },
]);
const slashCommands = [
createTestCommand({
name: 'skills',
description: 'List available skills',
completion: mockCompletionFn,
}),
];
const { result } = renderHook(() =>
useTestHarnessForSlashCompletion(
true,
'/skills ',
slashCommands,
mockCommandContext,
),
);
await waitFor(() => {
expect(result.current.suggestions).toEqual([
{
label: 'pdf',
value: 'pdf',
description: 'Create PDF documents',
},
{
label: 'xlsx',
value: 'xlsx',
description: 'Work with spreadsheets',
},
]);
});
});
it('should call command.completion with an empty string when args start with a space', async () => {
const mockCompletionFn = vi
.fn()

View File

@@ -9,7 +9,6 @@ import { AsyncFzf } from 'fzf';
import type { Suggestion } from '../components/SuggestionsDisplay.js';
import {
CommandKind,
type CommandCompletionItem,
type CommandContext,
type SlashCommand,
} from '../commands/types.js';
@@ -216,9 +215,10 @@ function useCommandSuggestions(
)) || [];
if (!signal.aborted) {
const finalSuggestions = results
.map((item) => toSuggestion(item))
.filter((suggestion): suggestion is Suggestion => !!suggestion);
const finalSuggestions = results.map((s) => ({
label: s,
value: s,
}));
setSuggestions(finalSuggestions);
setIsLoading(false);
}
@@ -310,20 +310,6 @@ function useCommandSuggestions(
return { suggestions, isLoading };
}
function toSuggestion(item: string | CommandCompletionItem): Suggestion | null {
if (typeof item === 'string') {
return { label: item, value: item };
}
if (!item.value) {
return null;
}
return {
label: item.label ?? item.value,
value: item.value,
description: item.description,
};
}
function useCompletionPositions(
query: string | null,
parserResult: CommandParserResult,
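Editor's note on the hook above: one side of this diff normalizes completion results that may be plain strings or richer `{ value, label, description }` items into Suggestion objects and drops unusable entries. A self-contained sketch of that mapping and of the type-predicate filter that consumes it (types are restated locally for illustration):

```ts
interface Suggestion {
  label: string;
  value: string;
  description?: string;
}

interface CommandCompletionItem {
  value: string;
  label?: string;
  description?: string;
}

function toSuggestion(item: string | CommandCompletionItem): Suggestion | null {
  if (typeof item === 'string') {
    return { label: item, value: item };
  }
  if (!item.value) {
    return null; // nothing to insert, so drop the item
  }
  return {
    label: item.label ?? item.value,
    value: item.value,
    description: item.description,
  };
}

const results: Array<string | CommandCompletionItem> = [
  'pdf',
  { value: 'xlsx', description: 'Work with spreadsheets' },
  { value: '' }, // filtered out below
];

const suggestions = results
  .map(toSuggestion)
  .filter((s): s is Suggestion => s !== null);
// -> [ { label: 'pdf', value: 'pdf' },
//      { label: 'xlsx', value: 'xlsx', description: 'Work with spreadsheets' } ]
```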

View File

@@ -62,7 +62,7 @@ const mockConfig = {
getAllowedTools: vi.fn(() => []),
getContentGeneratorConfig: () => ({
model: 'test-model',
authType: 'gemini',
authType: 'gemini-api-key',
}),
getUseSmartEdit: () => false,
getUseModelRouter: () => false,

View File

@@ -38,10 +38,10 @@ describe('keyMatchers', () => {
[Command.NAVIGATION_DOWN]: (key: Key) => key.name === 'down',
[Command.ACCEPT_SUGGESTION]: (key: Key) =>
key.name === 'tab' || (key.name === 'return' && !key.ctrl),
// Completion navigation only uses arrow keys (not Ctrl+P/N)
// to allow Ctrl+P/N to always navigate history
[Command.COMPLETION_UP]: (key: Key) => key.name === 'up',
[Command.COMPLETION_DOWN]: (key: Key) => key.name === 'down',
[Command.COMPLETION_UP]: (key: Key) =>
key.name === 'up' || (key.ctrl && key.name === 'p'),
[Command.COMPLETION_DOWN]: (key: Key) =>
key.name === 'down' || (key.ctrl && key.name === 'n'),
[Command.ESCAPE]: (key: Key) => key.name === 'escape',
[Command.SUBMIT]: (key: Key) =>
key.name === 'return' && !key.ctrl && !key.meta && !key.paste,
@@ -164,26 +164,14 @@ describe('keyMatchers', () => {
negative: [createKey('return', { ctrl: true }), createKey('space')],
},
{
// Completion navigation only uses arrow keys (not Ctrl+P/N)
// to allow Ctrl+P/N to always navigate history
command: Command.COMPLETION_UP,
positive: [createKey('up')],
negative: [
createKey('p'),
createKey('down'),
createKey('p', { ctrl: true }),
],
positive: [createKey('up'), createKey('p', { ctrl: true })],
negative: [createKey('p'), createKey('down')],
},
{
// Completion navigation only uses arrow keys (not Ctrl+P/N)
// to allow Ctrl+P/N to always navigate history
command: Command.COMPLETION_DOWN,
positive: [createKey('down')],
negative: [
createKey('n'),
createKey('up'),
createKey('n', { ctrl: true }),
],
positive: [createKey('down'), createKey('n', { ctrl: true })],
negative: [createKey('n'), createKey('up')],
},
// Text input
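Editor's note: the test data above exercises a Command-to-predicate table; one side of the diff restricts completion navigation to the arrow keys so that Ctrl+P/N stay free for history. A small sketch of how such a table is typically consumed (the `keyMatchers` object here is a stand-in, not the real export):

```ts
interface Key {
  name: string;
  ctrl?: boolean;
  meta?: boolean;
  paste?: boolean;
}

enum Command {
  COMPLETION_UP = 'COMPLETION_UP',
  COMPLETION_DOWN = 'COMPLETION_DOWN',
}

// Arrow-only matchers, matching the stricter side of the diff.
const keyMatchers: Record<Command, (key: Key) => boolean> = {
  [Command.COMPLETION_UP]: (key) => key.name === 'up',
  [Command.COMPLETION_DOWN]: (key) => key.name === 'down',
};

function commandForKey(key: Key): Command | undefined {
  return (Object.keys(keyMatchers) as Command[]).find((cmd) => keyMatchers[cmd](key));
}

// commandForKey({ name: 'up' }) === Command.COMPLETION_UP
// commandForKey({ name: 'p', ctrl: true }) === undefined  // left to history navigation
```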

View File

@@ -1,205 +0,0 @@
/**
* @license
* Copyright 2025 Qwen Team
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
getAvailableModelsForAuthType,
getFilteredQwenModels,
getOpenAIAvailableModelFromEnv,
isVisionModel,
getDefaultVisionModel,
AVAILABLE_MODELS_QWEN,
MAINLINE_VLM,
MAINLINE_CODER,
} from './availableModels.js';
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
describe('availableModels', () => {
describe('AVAILABLE_MODELS_QWEN', () => {
it('should include coder model', () => {
const coderModel = AVAILABLE_MODELS_QWEN.find(
(m) => m.id === MAINLINE_CODER,
);
expect(coderModel).toBeDefined();
expect(coderModel?.isVision).toBeFalsy();
});
it('should include vision model', () => {
const visionModel = AVAILABLE_MODELS_QWEN.find(
(m) => m.id === MAINLINE_VLM,
);
expect(visionModel).toBeDefined();
expect(visionModel?.isVision).toBe(true);
});
});
describe('getFilteredQwenModels', () => {
it('should return all models when vision preview is enabled', () => {
const models = getFilteredQwenModels(true);
expect(models.length).toBe(AVAILABLE_MODELS_QWEN.length);
});
it('should filter out vision models when preview is disabled', () => {
const models = getFilteredQwenModels(false);
expect(models.every((m) => !m.isVision)).toBe(true);
});
});
describe('getOpenAIAvailableModelFromEnv', () => {
const originalEnv = process.env;
beforeEach(() => {
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should return null when OPENAI_MODEL is not set', () => {
delete process.env['OPENAI_MODEL'];
expect(getOpenAIAvailableModelFromEnv()).toBeNull();
});
it('should return model from OPENAI_MODEL env var', () => {
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
const model = getOpenAIAvailableModelFromEnv();
expect(model?.id).toBe('gpt-4-turbo');
expect(model?.label).toBe('gpt-4-turbo');
});
it('should trim whitespace from env var', () => {
process.env['OPENAI_MODEL'] = ' gpt-4 ';
const model = getOpenAIAvailableModelFromEnv();
expect(model?.id).toBe('gpt-4');
});
});
describe('getAvailableModelsForAuthType', () => {
const originalEnv = process.env;
beforeEach(() => {
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should return hard-coded qwen models for qwen-oauth', () => {
const models = getAvailableModelsForAuthType(AuthType.QWEN_OAUTH);
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
});
it('should return hard-coded qwen models even when config is provided', () => {
const mockConfig = {
getAvailableModels: vi
.fn()
.mockReturnValue([
{ id: 'custom', label: 'Custom', authType: AuthType.QWEN_OAUTH },
]),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.QWEN_OAUTH,
mockConfig,
);
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
});
it('should use config.getAvailableModels for openai authType when available', () => {
const mockModels = [
{
id: 'gpt-4',
label: 'GPT-4',
description: 'Test',
authType: AuthType.USE_OPENAI,
isVision: false,
},
];
const getAvailableModelsForAuthType = vi.fn().mockReturnValue(mockModels);
const mockConfigWithMethod = {
// Prefer the newer API when available.
getAvailableModelsForAuthType,
};
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfigWithMethod as unknown as Config,
);
expect(getAvailableModelsForAuthType).toHaveBeenCalled();
expect(models[0].id).toBe('gpt-4');
});
it('should fallback to env var for openai when config returns empty', () => {
process.env['OPENAI_MODEL'] = 'fallback-model';
const mockConfig = {
getAvailableModelsForAuthType: vi.fn().mockReturnValue([]),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfig,
);
expect(models).toEqual([]);
});
it('should fallback to env var for openai when config throws', () => {
process.env['OPENAI_MODEL'] = 'fallback-model';
const mockConfig = {
getAvailableModelsForAuthType: vi.fn().mockImplementation(() => {
throw new Error('Registry not initialized');
}),
} as unknown as Config;
const models = getAvailableModelsForAuthType(
AuthType.USE_OPENAI,
mockConfig,
);
expect(models).toEqual([]);
});
it('should return env model for openai without config', () => {
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
expect(models[0].id).toBe('gpt-4-turbo');
});
it('should return empty array for openai without config or env', () => {
delete process.env['OPENAI_MODEL'];
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
expect(models).toEqual([]);
});
it('should return empty array for other auth types', () => {
const models = getAvailableModelsForAuthType(AuthType.USE_GEMINI);
expect(models).toEqual([]);
});
});
describe('isVisionModel', () => {
it('should return true for vision model', () => {
expect(isVisionModel(MAINLINE_VLM)).toBe(true);
});
it('should return false for non-vision model', () => {
expect(isVisionModel(MAINLINE_CODER)).toBe(false);
});
it('should return false for unknown model', () => {
expect(isVisionModel('unknown-model')).toBe(false);
});
});
describe('getDefaultVisionModel', () => {
it('should return the vision model ID', () => {
expect(getDefaultVisionModel()).toBe(MAINLINE_VLM);
});
});
});

View File

@@ -4,12 +4,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
import {
AuthType,
DEFAULT_QWEN_MODEL,
type Config,
type AvailableModel as CoreAvailableModel,
} from '@qwen-code/qwen-code-core';
import { AuthType, DEFAULT_QWEN_MODEL } from '@qwen-code/qwen-code-core';
import { t } from '../../i18n/index.js';
export type AvailableModel = {
@@ -62,78 +57,20 @@ export function getFilteredQwenModels(
*/
export function getOpenAIAvailableModelFromEnv(): AvailableModel | null {
const id = process.env['OPENAI_MODEL']?.trim();
return id
? {
id,
label: id,
get description() {
return t('Configured via OPENAI_MODEL environment variable');
},
}
: null;
return id ? { id, label: id } : null;
}
export function getAnthropicAvailableModelFromEnv(): AvailableModel | null {
const id = process.env['ANTHROPIC_MODEL']?.trim();
return id
? {
id,
label: id,
get description() {
return t('Configured via ANTHROPIC_MODEL environment variable');
},
}
: null;
return id ? { id, label: id } : null;
}
/**
* Convert core AvailableModel to CLI AvailableModel format
*/
function convertCoreModelToCliModel(
coreModel: CoreAvailableModel,
): AvailableModel {
return {
id: coreModel.id,
label: coreModel.label,
description: coreModel.description,
isVision: coreModel.isVision ?? coreModel.capabilities?.vision ?? false,
};
}
/**
* Get available models for the given authType.
*
* If a Config object is provided, uses config.getAvailableModelsForAuthType().
* For qwen-oauth, always returns the hard-coded models.
* Falls back to environment variables only when no config is provided.
*/
export function getAvailableModelsForAuthType(
authType: AuthType,
config?: Config,
): AvailableModel[] {
// For qwen-oauth, always use hard-coded models, this aligns with the API gateway.
if (authType === AuthType.QWEN_OAUTH) {
return AVAILABLE_MODELS_QWEN;
}
// Use config's model registry when available
if (config) {
try {
const models = config.getAvailableModelsForAuthType(authType);
if (models.length > 0) {
return models.map(convertCoreModelToCliModel);
}
} catch {
// If config throws (e.g., not initialized), return empty array
}
// When a Config object is provided, we intentionally do NOT fall back to env-based
// "raw" models. These may reflect the currently effective config but should not be
// presented as selectable options in /model.
return [];
}
// Fall back to environment variables for specific auth types (no config provided)
switch (authType) {
case AuthType.QWEN_OAUTH:
return AVAILABLE_MODELS_QWEN;
case AuthType.USE_OPENAI: {
const openAIModel = getOpenAIAvailableModelFromEnv();
return openAIModel ? [openAIModel] : [];
@@ -143,10 +80,13 @@ export function getAvailableModelsForAuthType(
return anthropicModel ? [anthropicModel] : [];
}
default:
// For other auth types, return empty array for now
// This can be expanded later according to the design doc
return [];
}
}
/**
/**
* Hard code the default vision model as a string literal,
* until our coding model supports multimodal.
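Editor's note: the richer variant of getAvailableModelsForAuthType in the hunk above resolves models in a fixed order: the hard-coded list for qwen-oauth, then the Config registry, then (only when no Config is passed) environment variables. A usage sketch against that behaviour; the Config object is a hand-rolled stub, not a real instance:

```ts
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
import {
  getAvailableModelsForAuthType,
  AVAILABLE_MODELS_QWEN,
} from './availableModels.js';

// 1. qwen-oauth always returns the hard-coded list, with or without a Config.
getAvailableModelsForAuthType(AuthType.QWEN_OAUTH); // AVAILABLE_MODELS_QWEN

// 2. With a Config, the registry wins; an empty or throwing registry yields [].
const stubConfig = {
  getAvailableModelsForAuthType: () => [
    { id: 'gpt-4', label: 'GPT-4', isVision: false },
  ],
} as unknown as Config;
getAvailableModelsForAuthType(AuthType.USE_OPENAI, stubConfig); // [{ id: 'gpt-4', ... }]

// 3. Without a Config, OPENAI_MODEL / ANTHROPIC_MODEL act as a single-model fallback.
process.env['OPENAI_MODEL'] = 'my-model';
getAvailableModelsForAuthType(AuthType.USE_OPENAI); // [{ id: 'my-model', label: 'my-model', ... }]
```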

View File

@@ -201,21 +201,12 @@ export interface ToolDefinition {
description?: string;
}
export interface SkillDefinition {
name: string;
}
export type HistoryItemToolsList = HistoryItemBase & {
type: 'tools_list';
tools: ToolDefinition[];
showDescriptions: boolean;
};
export type HistoryItemSkillsList = HistoryItemBase & {
type: 'skills_list';
skills: SkillDefinition[];
};
// JSON-friendly types for using as a simple data model showing info about an
// MCP Server.
export interface JsonMcpTool {
@@ -277,7 +268,6 @@ export type HistoryItemWithoutId =
| HistoryItemCompression
| HistoryItemExtensionsList
| HistoryItemToolsList
| HistoryItemSkillsList
| HistoryItemMcpStatus;
export type HistoryItem = HistoryItemWithoutId & { id: number };
@@ -299,7 +289,6 @@ export enum MessageType {
SUMMARY = 'summary',
EXTENSIONS_LIST = 'extensions_list',
TOOLS_LIST = 'tools_list',
SKILLS_LIST = 'skills_list',
MCP_STATUS = 'mcp_status',
}

Some files were not shown because too many files have changed in this diff.