Mirror of https://github.com/QwenLM/qwen-code.git, synced 2025-12-24 18:49:13 +00:00.

## Compare commits

13 commits (`fix/e2e-te…` ... `feature/re…`; branch names truncated in this capture).
| Author | SHA1 | Date |
|---|---|---|
| | 0b5fffa507 | |
| | 9d1ffb4af6 | |
| | bdf946a321 | |
| | 44de3f686c | |
| | 08415c9597 | |
| | d360b86588 | |
| | 66d43dbc5d | |
| | dc6dcea93d | |
| | e27610789f | |
| | cff88350f4 | |
| | 1bfe5a796a | |
| | 9f8ec8c0be | |
| | bb6db7e492 | |
**.github/workflows/e2e.yml** (vendored, 6 changed lines)

```diff
@@ -44,7 +44,7 @@ jobs:
       - name: Run E2E tests
         env:
-          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
-          OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
-          OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
+          QWEN_CODE_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          QWEN_CODE_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
+          QWEN_CODE_MODEL: ${{ secrets.OPENAI_MODEL }}
         run: npm run test:integration:${{ matrix.sandbox }} -- --verbose --keep-output
```
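Outside CI, the same variable mapping can be reproduced in a local shell before running the integration tests. A minimal sketch (the placeholder values are assumptions; the variable names come from the workflow diff above):

```shell
# Mirror the workflow's env: block locally: feed the existing OPENAI_*
# values into the QWEN_CODE_* names that the updated CLI reads.
export OPENAI_API_KEY="sk-placeholder"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"

export QWEN_CODE_API_KEY="$OPENAI_API_KEY"
export QWEN_CODE_BASE_URL="$OPENAI_BASE_URL"
export QWEN_CODE_MODEL="$OPENAI_MODEL"

echo "QWEN_CODE_MODEL=$QWEN_CODE_MODEL"
```

With these set, the `npm run test:integration:…` command from the workflow sees the same environment that CI provides.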
**Dockerfile** (14 changed lines)

```diff
@@ -1,6 +1,6 @@
 FROM docker.io/library/node:20-slim

-ARG SANDBOX_NAME="gemini-cli-sandbox"
+ARG SANDBOX_NAME="qwen-code-sandbox"
 ARG CLI_VERSION_ARG
 ENV SANDBOX="$SANDBOX_NAME"
 ENV CLI_VERSION=$CLI_VERSION_ARG
@@ -39,12 +39,12 @@ ENV PATH=$PATH:/usr/local/share/npm-global/bin
 # switch to non-root user node
 USER node

-# install gemini-cli and clean up
-COPY packages/cli/dist/google-gemini-cli-*.tgz /usr/local/share/npm-global/gemini-cli.tgz
-COPY packages/core/dist/google-gemini-cli-core-*.tgz /usr/local/share/npm-global/gemini-core.tgz
-RUN npm install -g /usr/local/share/npm-global/gemini-cli.tgz /usr/local/share/npm-global/gemini-core.tgz \
+# install qwen-code and clean up
+COPY packages/cli/dist/qwen-code-*.tgz /usr/local/share/npm-global/qwen-code.tgz
+COPY packages/core/dist/qwen-code-qwen-code-core-*.tgz /usr/local/share/npm-global/qwen-code-core.tgz
+RUN npm install -g /usr/local/share/npm-global/qwen-code.tgz /usr/local/share/npm-global/qwen-code-core.tgz \
   && npm cache clean --force \
-  && rm -f /usr/local/share/npm-global/gemini-{cli,core}.tgz
+  && rm -f /usr/local/share/npm-global/qwen-{code,code-core}.tgz

 # default entrypoint when none specified
-CMD ["gemini"]
+CMD ["qwen"]
```
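The cleanup line in the updated Dockerfile relies on shell brace expansion to delete both tarballs with one `rm`. A quick sketch of what `qwen-{code,code-core}.tgz` expands to, run explicitly under bash (the image's `RUN` shell must support brace expansion for this pattern to work):

```shell
# Show what the Dockerfile's cleanup pattern expands to under bash.
expanded=$(bash -c 'echo /usr/local/share/npm-global/qwen-{code,code-core}.tgz')
echo "$expanded"
# → /usr/local/share/npm-global/qwen-code.tgz /usr/local/share/npm-global/qwen-code-core.tgz
```

Both names produced by the expansion match the two `COPY` destinations above, so the installed tarballs are removed after `npm install -g` completes.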
**README.md** (245 changed lines)

````diff
@@ -1,11 +1,26 @@
 # Qwen Code

+<div align="center">
+

-Qwen Code is a command-line AI workflow tool adapted from [**Gemini CLI**](https://github.com/google-gemini/gemini-cli) (Please refer to [this document](./README.gemini.md) for more details), optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder) models with enhanced parser support & tool support.
+[](https://www.npmjs.com/package/@qwen-code/qwen-code)
+[](./LICENSE)
+[](https://nodejs.org/)
+[](https://www.npmjs.com/package/@qwen-code/qwen-code)
+
+**AI-powered command-line workflow tool for developers**
+
+[Installation](#installation) • [Quick Start](#quick-start) • [Features](#key-features) • [Documentation](./docs/) • [Contributing](./CONTRIBUTING.md)
+
+</div>
+
+Qwen Code is a powerful command-line AI workflow tool adapted from [**Gemini CLI**](https://github.com/google-gemini/gemini-cli) ([details](./README.gemini.md)), specifically optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder) models. It enhances your development workflow with advanced code understanding, automated tasks, and intelligent assistance.

 > [!WARNING]
-> Qwen Code may issue multiple API calls per cycle, resulting in higher token usage, similar to Claude Code. We're actively working to enhance API efficiency and improve the overall developer experience. ModelScope offers 2,000 free API calls if you are in China mainland. Please check [API config section](#api-configuration) for more details.
+> **Token Usage Notice**: Qwen Code may issue multiple API calls per cycle, resulting in higher token usage (similar to Claude Code). We're actively optimizing API efficiency.
+>
+> 💡 **Free Option**: ModelScope provides **2,000 free API calls per day** for users in mainland China. OpenRouter offers up to **1,000 free API calls per day** worldwide. For setup instructions, see [API Configuration](#api-configuration).

 ## Key Features
@@ -13,7 +28,7 @@ Qwen Code is a command-line AI workflow tool adapted from [**Gemini CLI**](https
 - **Workflow Automation** - Automate operational tasks like handling pull requests and complex rebases
 - **Enhanced Parser** - Adapted parser specifically optimized for Qwen-Coder models

-## Quick Start
+## Installation

 ### Prerequisites
@@ -23,20 +38,14 @@ Ensure you have [Node.js version 20](https://nodejs.org/en/download) or higher i
 curl -qL https://www.npmjs.com/install.sh | sh
 ```

-### Installation
+### Install from npm

 ```bash
 npm install -g @qwen-code/qwen-code@latest
+qwen --version
 ```

-Then run from anywhere:
-
-```bash
-qwen
-```
-
-Or you can install it from source:
+### Install from source

 ```bash
 git clone https://github.com/QwenLM/qwen-code.git
@@ -45,118 +54,234 @@ npm install
 npm install -g .
 ```

-We now support max session token limit, you can set it in your `.qwen/settings.json` file to save the token usage.
-For example, if you want to set the max session token limit to 32000, you can set it like this:
+## Quick Start
+
+```bash
+# Start Qwen Code
+qwen
+
+# Example commands
+> Explain this codebase structure
+> Help me refactor this function
+> Generate unit tests for this module
+```
+
+### Session Management
+
+Control your token usage with configurable session limits to optimize costs and performance.
+
+#### Configure Session Token Limit
+
+Create or edit `.qwen/settings.json` in your home directory:

 ```json
 {
-  "maxSessionToken": 32000
+  "sessionTokenLimit": 32000
 }
 ```

-The max session means the maximum number of tokens that can be used in one chat (not the total usage during multiple tool call shoots); if you reach the limit, you can use the `/compress` command to compress the history and go on, or use `/clear` command to clear the history.
+#### Session Commands
+
+- **`/compress`** - Compress conversation history to continue within token limits
+- **`/clear`** - Clear all conversation history and start fresh
+- **`/status`** - Check current token usage and limits
+
+> 📝 **Note**: Session token limit applies to a single conversation, not cumulative API calls.

 ### API Configuration

-Set your Qwen API key (In Qwen Code project, you can also set your API key in `.env` file). The `.env` file should be placed in the root directory of your current project.
+Qwen Code supports multiple API providers. You can configure your API key through environment variables or a `.env` file in your project root.

-> ⚠️ **Notice:** <br>
-> **If you are in mainland China, please go to https://bailian.console.aliyun.com/ or https://modelscope.cn/docs/model-service/API-Inference/intro to apply for your API key** <br>
-> **If you are not in mainland China, please go to https://modelstudio.console.alibabacloud.com/ to apply for your API key**
+#### Configuration Methods

-If you are in mainland China, you can use Qwen3-Coder through the Alibaba Cloud Bailian platform.
+1. **Environment Variables**
+
+   ```bash
+   export QWEN_CODE_API_KEY="your_api_key_here"
+   export QWEN_CODE_BASE_URL="your_api_endpoint"
+   export QWEN_CODE_MODEL="your_model_choice"
+   ```
+
+2. **Project `.env` File**
+   Create a `.env` file in your project root:
+   ```env
+   QWEN_CODE_API_KEY=your_api_key_here
+   QWEN_CODE_BASE_URL=your_api_endpoint
+   QWEN_CODE_MODEL=your_model_choice
+   ```
+
+#### API Provider Options
+
+> ⚠️ **Regional Notice:**
+>
+> - **Mainland China**: Use Alibaba Cloud Bailian or ModelScope
+> - **International**: Use Alibaba Cloud ModelStudio or OpenRouter
+
+<details>
+<summary><b>🇨🇳 For Users in Mainland China</b></summary>
+
+**Option 1: Alibaba Cloud Bailian** ([Apply for API Key](https://bailian.console.aliyun.com/))

 ```bash
-export OPENAI_API_KEY="your_api_key_here"
-export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
-export OPENAI_MODEL="qwen3-coder-plus"
+export QWEN_CODE_API_KEY="your_api_key_here"
+export QWEN_CODE_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
+export QWEN_CODE_MODEL="qwen3-coder-plus"
 ```

-If you are in mainland China, ModelScope offers 2,000 free model inference API calls per day:
+**Option 2: ModelScope (Free Tier)** ([Apply for API Key](https://modelscope.cn/docs/model-service/API-Inference/intro))
+
+- ✅ **2,000 free API calls per day**
+- ⚠️ Connect your Aliyun account to avoid authentication errors

 ```bash
-export OPENAI_API_KEY="your_api_key_here"
-export OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
-export OPENAI_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
+export QWEN_CODE_API_KEY="your_api_key_here"
+export QWEN_CODE_BASE_URL="https://api-inference.modelscope.cn/v1"
+export QWEN_CODE_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
 ```

-If you are not in mainland China, you can use Qwen3-Coder through the Alibaba Cloud ModelStudio platform.
+</details>
+
+<details>
+<summary><b>🌍 For International Users</b></summary>
+
+**Option 1: Alibaba Cloud ModelStudio** ([Apply for API Key](https://modelstudio.console.alibabacloud.com/))

 ```bash
-export OPENAI_API_KEY="your_api_key_here"
-export OPENAI_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
-export OPENAI_MODEL="qwen3-coder-plus"
+export QWEN_CODE_API_KEY="your_api_key_here"
+export QWEN_CODE_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
+export QWEN_CODE_MODEL="qwen3-coder-plus"
 ```

+**Option 2: OpenRouter (Free Tier Available)** ([Apply for API Key](https://openrouter.ai/))
+
+```bash
+export QWEN_CODE_API_KEY="your_api_key_here"
+export QWEN_CODE_BASE_URL="https://openrouter.ai/api/v1"
+export QWEN_CODE_MODEL="qwen/qwen3-coder:free"
+```
+
+</details>
+
 ## Usage Examples

-### Explore Codebases
+### 🔍 Explore Codebases

-```sh
+```bash
 cd your-project/
 qwen
+
+# Architecture analysis
 > Describe the main pieces of this system's architecture
+> What are the key dependencies and how do they interact?
+> Find all API endpoints and their authentication methods
 ```

-### Code Development
+### 💻 Code Development

-```sh
+```bash
+# Refactoring
 > Refactor this function to improve readability and performance
+> Convert this class to use dependency injection
+> Split this large module into smaller, focused components
+
+# Code generation
+> Create a REST API endpoint for user management
+> Generate unit tests for the authentication module
+> Add error handling to all database operations
 ```

-### Automate Workflows
+### 🔄 Automate Workflows

-```sh
-> Analyze git commits from the last 7 days, grouped by feature and team member
-```
+```bash
+# Git automation
+> Analyze git commits from the last 7 days, grouped by feature
+> Create a changelog from recent commits
+> Find all TODO comments and create GitHub issues
+```

-```sh
+```bash
 # File operations
 > Convert all images in this directory to PNG format
+> Rename all test files to follow the *.test.ts pattern
+> Find and remove all console.log statements
 ```

+### 🐛 Debugging & Analysis
+
+```bash
+# Performance analysis
+> Identify performance bottlenecks in this React component
+> Find all N+1 query problems in the codebase
+
+# Security audit
+> Check for potential SQL injection vulnerabilities
+> Find all hardcoded credentials or API keys
+```
+
 ## Popular Tasks

-### Understand New Codebases
+### 📚 Understand New Codebases

 ```text
 > What are the core business logic components?
 > What security mechanisms are in place?
-> How does the data flow work?
+> How does the data flow through the system?
+> What are the main design patterns used?
+> Generate a dependency graph for this module
 ```

-### Code Refactoring & Optimization
+### 🔨 Code Refactoring & Optimization

 ```text
 > What parts of this module can be optimized?
-> Help me refactor this class to follow better design patterns
+> Help me refactor this class to follow SOLID principles
 > Add proper error handling and logging
+> Convert callbacks to async/await pattern
+> Implement caching for expensive operations
 ```

-### Documentation & Testing
+### 📝 Documentation & Testing

 ```text
-> Generate comprehensive JSDoc comments for this function
-> Write unit tests for this component
-> Create API documentation
+> Generate comprehensive JSDoc comments for all public APIs
+> Write unit tests with edge cases for this component
+> Create API documentation in OpenAPI format
+> Add inline comments explaining complex algorithms
+> Generate a README for this module
 ```

+### 🚀 Development Acceleration
+
+```text
+> Set up a new Express server with authentication
+> Create a React component with TypeScript and tests
+> Implement a rate limiter middleware
+> Add database migrations for new schema
+> Configure CI/CD pipeline for this project
+```
+
+## Commands & Shortcuts
+
+### Session Commands
+
+- `/help` - Display available commands
+- `/clear` - Clear conversation history
+- `/compress` - Compress history to save tokens
+- `/status` - Show current session information
+- `/exit` or `/quit` - Exit Qwen Code
+
+### Keyboard Shortcuts
+
+- `Ctrl+C` - Cancel current operation
+- `Ctrl+D` - Exit (on empty line)
+- `Up/Down` - Navigate command history
+
 ## Benchmark Results

-### Terminal-Bench
+### Terminal-Bench Performance

 | Agent     | Model              | Accuracy |
 | --------- | ------------------ | -------- |
-| Qwen Code | Qwen3-Coder-480A35 | 37.5     |
-
-## Project Structure
-
-```
-qwen-code/
-├── packages/   # Core packages
-├── docs/       # Documentation
-├── examples/   # Example code
-└── tests/      # Test files
-```
+| Qwen Code | Qwen3-Coder-480A35 | 37.5%    |
+| Qwen Code | Qwen3-Coder-30BA3B | 31.3%    |

 ## Development & Contributing
````
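The `sessionTokenLimit` key introduced in the README diff (renamed from `maxSessionToken`) lives in `.qwen/settings.json`. A sketch of creating that file, with a temporary directory standing in for the real home directory so nothing outside the sandbox is touched:

```shell
# Write the renamed sessionTokenLimit setting (was maxSessionToken);
# a temp dir stands in for ~/.qwen here.
conf_dir=$(mktemp -d)
cat > "$conf_dir/settings.json" <<'EOF'
{
  "sessionTokenLimit": 32000
}
EOF
grep '"sessionTokenLimit"' "$conf_dir/settings.json"
```

Per the README text above, the limit applies to a single conversation; `/compress` or `/clear` resets usage once it is reached.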
*(documentation files; file headers not preserved in this capture)*

````diff
@@ -219,8 +219,8 @@ In addition to a project settings file, a project's `.gemini` directory can cont
 - **Description:** Configures custom system prompt templates for specific model names and base URLs. This allows you to use different system prompts for different AI models or API endpoints.
 - **Default:** `undefined` (uses default system prompt)
 - **Properties:**
-  - **`baseUrls`** (array of strings, optional): Array of base URLs to exactly match against `OPENAI_BASE_URL` environment variable. If not specified, matches any base URL.
-  - **`modelNames`** (array of strings, optional): Array of model names to exactly match against `OPENAI_MODEL` environment variable. If not specified, matches any model.
+  - **`baseUrls`** (array of strings, optional): Array of base URLs to exactly match against `QWEN_CODE_BASE_URL` environment variable. If not specified, matches any base URL.
+  - **`modelNames`** (array of strings, optional): Array of model names to exactly match against `QWEN_CODE_MODEL` environment variable. If not specified, matches any model.
   - **`template`** (string): The system prompt template to use when both baseUrl and modelNames match. Supports placeholders:
     - `{RUNTIME_VARS_IS_GIT_REPO}`: Replaced with `true` or `false` based on whether the current directory is a git repository
     - `{RUNTIME_VARS_SANDBOX}`: Replaced with the sandbox type (e.g., `"sandbox-exec"`, `"docker"`, or empty string)
@@ -40,9 +40,9 @@ qwen-code --openai-api-key "your-api-key-here" --model "gpt-4-turbo"
 Set the following environment variables in your shell or `.env` file:

 ```bash
-export OPENAI_API_KEY="your-api-key-here"
-export OPENAI_BASE_URL="https://api.openai.com/v1" # Optional, defaults to this value
-export OPENAI_MODEL="gpt-4o" # Optional, defaults to gpt-4o
+export QWEN_CODE_API_KEY="your-api-key-here"
+export QWEN_CODE_BASE_URL="https://api.openai.com/v1" # Optional, defaults to this value
+export QWEN_CODE_MODEL="gpt-4o" # Optional, defaults to gpt-4o
 ```

 ## Supported Models
@@ -58,7 +58,7 @@ The CLI supports all OpenAI models that are available through the OpenAI API, in

 ## Custom Endpoints

-You can use custom endpoints by setting the `OPENAI_BASE_URL` environment variable or using the `--openai-base-url` command line argument. This is useful for:
+You can use custom endpoints by setting the `QWEN_CODE_BASE_URL` environment variable or using the `--openai-base-url` command line argument. This is useful for:

 - Using Azure OpenAI
 - Using other OpenAI-compatible APIs
````
**package-lock.json** (generated, 10 changed lines)

```diff
@@ -1,12 +1,12 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@qwen-code/qwen-code",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "workspaces": [
         "packages/*"
       ],
@@ -11945,7 +11945,7 @@
     },
     "packages/cli": {
       "name": "@qwen-code/qwen-code",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "dependencies": {
         "@qwen-code/qwen-code-core": "file:../core",
         "@types/update-notifier": "^6.0.8",
@@ -12123,7 +12123,7 @@
     },
     "packages/core": {
       "name": "@qwen-code/qwen-code-core",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "dependencies": {
         "@google/genai": "1.8.0",
         "@modelcontextprotocol/sdk": "^1.11.0",
@@ -12197,7 +12197,7 @@
     },
     "packages/vscode-ide-companion": {
      "name": "@qwen-code/qwen-code-vscode-ide-companion",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.15.1",
         "cors": "^2.8.5",
```
*(a `package.json`; file header not preserved in this capture)*

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "engines": {
     "node": ">=20"
   },
@@ -13,7 +13,7 @@
     "url": "git+http://gitlab.alibaba-inc.com/Qwen-Coder/qwen-code.git"
   },
   "config": {
-    "sandboxImageUri": "us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.0.1-alpha.8"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.1-alpha.12"
   },
   "scripts": {
     "start": "node scripts/start.js",
```
*(a second `package.json`, apparently the CLI package; file header not preserved in this capture)*

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "description": "Gemini CLI",
   "repository": {
     "type": "git",
@@ -25,7 +25,7 @@
     "dist"
   ],
   "config": {
-    "sandboxImageUri": "us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.0.1-alpha.8"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.1-alpha.12"
   },
   "dependencies": {
     "@qwen-code/qwen-code-core": "file:../core",
```
*(TypeScript auth configuration; file header not preserved in this capture)*

```diff
@@ -39,8 +39,8 @@ export const validateAuthMethod = (authMethod: string): string | null => {
   }

   if (authMethod === AuthType.USE_OPENAI) {
-    if (!process.env.OPENAI_API_KEY) {
-      return 'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.';
+    if (!process.env.QWEN_CODE_API_KEY) {
+      return 'QWEN_CODE_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.';
     }
     return null;
   }
@@ -49,13 +49,13 @@ export const validateAuthMethod = (authMethod: string): string | null => {
 };

 export const setOpenAIApiKey = (apiKey: string): void => {
-  process.env.OPENAI_API_KEY = apiKey;
+  process.env.QWEN_CODE_API_KEY = apiKey;
 };

 export const setOpenAIBaseUrl = (baseUrl: string): void => {
-  process.env.OPENAI_BASE_URL = baseUrl;
+  process.env.QWEN_CODE_BASE_URL = baseUrl;
 };

 export const setOpenAIModel = (model: string): void => {
-  process.env.OPENAI_MODEL = model;
+  process.env.QWEN_CODE_MODEL = model;
 };
```
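The updated `validateAuthMethod` simply checks whether `QWEN_CODE_API_KEY` is present and returns the error string otherwise. The same guard, sketched in shell (the error text is copied from the diff above; the rest is illustrative):

```shell
# Reproduce the validateAuthMethod guard: report the missing-variable
# error when QWEN_CODE_API_KEY is unset, and nothing otherwise.
unset QWEN_CODE_API_KEY
err=""
if [ -z "${QWEN_CODE_API_KEY:-}" ]; then
  err="QWEN_CODE_API_KEY environment variable not found. You can enter it interactively or add it to your .env file."
fi
echo "$err"
```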
*(CLI config tests; file header not preserved in this capture)*

```diff
@@ -78,14 +78,15 @@ vi.mock('@qwen-code/qwen-code-core', async () => {
     getTelemetryLogPromptsEnabled(): boolean {
       return (
         (this as unknown as { telemetrySettings?: { logPrompts?: boolean } })
-          .telemetrySettings?.logPrompts ?? true
+          .telemetrySettings?.logPrompts ?? false
       );
     }

     getTelemetryOtlpEndpoint(): string {
       return (
         (this as unknown as { telemetrySettings?: { otlpEndpoint?: string } })
-          .telemetrySettings?.otlpEndpoint ?? 'http://localhost:4317'
+          .telemetrySettings?.otlpEndpoint ??
+          'http://tracing-analysis-dc-hz.aliyuncs.com:8090'
       );
     }
@@ -349,7 +350,9 @@ describe('loadCliConfig telemetry', () => {
     const argv = await parseArguments();
     const settings: Settings = { telemetry: { enabled: true } };
     const config = await loadCliConfig(settings, [], 'test-session', argv);
-    expect(config.getTelemetryOtlpEndpoint()).toBe('http://localhost:4317');
+    expect(config.getTelemetryOtlpEndpoint()).toBe(
+      'http://tracing-analysis-dc-hz.aliyuncs.com:8090',
+    );
   });

   it('should use telemetry target from settings if CLI flag is not present', async () => {
@@ -408,12 +411,12 @@ describe('loadCliConfig telemetry', () => {
     expect(config.getTelemetryLogPromptsEnabled()).toBe(false);
   });

-  it('should use default log prompts (true) if no value is provided via CLI or settings', async () => {
+  it('should use default log prompts (false) if no value is provided via CLI or settings', async () => {
     process.argv = ['node', 'script.js'];
     const argv = await parseArguments();
     const settings: Settings = { telemetry: { enabled: true } };
     const config = await loadCliConfig(settings, [], 'test-session', argv);
-    expect(config.getTelemetryLogPromptsEnabled()).toBe(true);
+    expect(config.getTelemetryLogPromptsEnabled()).toBe(false);
   });

   it('should set enableOpenAILogging to true when --openai-logging flag is present', async () => {
```
*(CLI config loader; file header not preserved in this capture)*

```diff
@@ -264,12 +264,12 @@ export async function loadCliConfig(

   // Handle OpenAI API key from command line
   if (argv.openaiApiKey) {
-    process.env.OPENAI_API_KEY = argv.openaiApiKey;
+    process.env.QWEN_CODE_API_KEY = argv.openaiApiKey;
   }

   // Handle OpenAI base URL from command line
   if (argv.openaiBaseUrl) {
-    process.env.OPENAI_BASE_URL = argv.openaiBaseUrl;
+    process.env.QWEN_CODE_BASE_URL = argv.openaiBaseUrl;
   }

   // Set the context filename in the server's memoryTool module BEFORE loading memory
```
*(`useGeminiStream` tests; file header not preserved in this capture)*

```diff
@@ -1173,7 +1173,7 @@ describe('useGeminiStream', () => {
       mockAuthType,
       undefined,
       'gemini-2.5-pro',
-      'gemini-2.5-flash',
+      'qwen3-coder-flash',
     );
   });
 });
```
*(`parseAndFormatApiError` tests; file header not preserved in this capture)*

```diff
@@ -39,7 +39,7 @@ describe('parseAndFormatApiError', () => {
     );
     expect(result).toContain('[API Error: Rate limit exceeded');
     expect(result).toContain(
-      'Possible quota limitations in place or slow response times detected. Switching to the gemini-2.5-flash model',
+      'Possible quota limitations in place or slow response times detected. Switching to the qwen3-coder-flash model',
     );
   });
@@ -55,7 +55,7 @@ describe('parseAndFormatApiError', () => {
     );
     expect(result).toContain('[API Error: Rate limit exceeded');
     expect(result).toContain(
-      'Possible quota limitations in place or slow response times detected. Switching to the gemini-2.5-flash model',
+      'Possible quota limitations in place or slow response times detected. Switching to the qwen3-coder-flash model',
     );
   });
@@ -169,7 +169,7 @@ describe('parseAndFormatApiError', () => {
     );
     expect(result).toContain('[API Error: Rate limit exceeded');
     expect(result).toContain(
-      'Possible quota limitations in place or slow response times detected. Switching to the gemini-2.5-flash model',
+      'Possible quota limitations in place or slow response times detected. Switching to the qwen3-coder-flash model',
     );
     expect(result).not.toContain(
       'You have reached your daily gemini-2.5-pro quota limit',
```
*(sandbox launcher; file header not preserved in this capture)*

```diff
@@ -31,9 +31,9 @@ function getContainerPath(hostPath: string): string {
   return hostPath;
 }

-const LOCAL_DEV_SANDBOX_IMAGE_NAME = 'gemini-cli-sandbox';
-const SANDBOX_NETWORK_NAME = 'gemini-cli-sandbox';
-const SANDBOX_PROXY_NAME = 'gemini-cli-sandbox-proxy';
+const LOCAL_DEV_SANDBOX_IMAGE_NAME = 'qwen-code-sandbox';
+const SANDBOX_NETWORK_NAME = 'qwen-code-sandbox';
+const SANDBOX_PROXY_NAME = 'qwen-code-sandbox-proxy';
 const BUILTIN_SEATBELT_PROFILES = [
   'permissive-open',
   'permissive-closed',
@@ -172,8 +172,8 @@ function entrypoint(workdir: string): string[] {
       ? 'npm run debug --'
       : 'npm rebuild && npm run start --'
     : process.env.DEBUG
-      ? `node --inspect-brk=0.0.0.0:${process.env.DEBUG_PORT || '9229'} $(which gemini)`
-      : 'gemini';
+      ? `node --inspect-brk=0.0.0.0:${process.env.DEBUG_PORT || '9229'} $(which qwen)`
+      : 'qwen';

   const args = [...shellCmds, cliCmd, ...cliArgs];

@@ -517,6 +517,17 @@ export async function start_sandbox(
     args.push('--env', `GOOGLE_API_KEY=${process.env.GOOGLE_API_KEY}`);
   }

+  // copy QWEN_CODE_API_KEY and related env vars for Qwen
+  if (process.env.QWEN_CODE_API_KEY) {
+    args.push('--env', `QWEN_CODE_API_KEY=${process.env.QWEN_CODE_API_KEY}`);
+  }
+  if (process.env.QWEN_CODE_BASE_URL) {
+    args.push('--env', `QWEN_CODE_BASE_URL=${process.env.QWEN_CODE_BASE_URL}`);
+  }
+  if (process.env.QWEN_CODE_MODEL) {
+    args.push('--env', `QWEN_CODE_MODEL=${process.env.QWEN_CODE_MODEL}`);
+  }
+
   // copy GOOGLE_GENAI_USE_VERTEXAI
   if (process.env.GOOGLE_GENAI_USE_VERTEXAI) {
     args.push(
```
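The new `start_sandbox` branch forwards each `QWEN_CODE_*` variable into the container only when it is set. The same conditional `--env` assembly, sketched in shell (variable values here are placeholders):

```shell
# Build the docker/podman --env arguments the way start_sandbox does:
# one flag per QWEN_CODE_* variable that is actually set.
export QWEN_CODE_API_KEY="placeholder-key"
export QWEN_CODE_MODEL="qwen3-coder-plus"
unset QWEN_CODE_BASE_URL

args=""
for var in QWEN_CODE_API_KEY QWEN_CODE_BASE_URL QWEN_CODE_MODEL; do
  val=$(printenv "$var" || true)
  if [ -n "$val" ]; then
    args="$args --env $var=$val"
  fi
done
echo "$args"
```

Only the two exported variables produce flags; the unset `QWEN_CODE_BASE_URL` is skipped, matching the `if (process.env.…)` guards in the diff.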
**packages/core/package-lock.json** (generated, 4 changed lines)

```diff
@@ -1,12 +1,12 @@
 {
   "name": "@google/gemini-cli-core",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@google/gemini-cli-core",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "dependencies": {
         "@google/genai": "^1.4.0",
         "@modelcontextprotocol/sdk": "^1.11.0",
```
*(`package.json` for the core package; file header not preserved in this capture)*

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@qwen-code/qwen-code-core",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "description": "Qwen Code Core",
   "repository": {
     "type": "git",
```
*(core config tests; file header not preserved in this capture)*

```diff
@@ -93,7 +93,7 @@ describe('Server Config (config.ts)', () => {
   const QUESTION = 'test question';
   const FULL_CONTEXT = false;
   const USER_MEMORY = 'Test User Memory';
-  const TELEMETRY_SETTINGS = { enabled: false };
+  const TELEMETRY_SETTINGS = { enabled: true };
   const EMBEDDING_MODEL = 'gemini-embedding';
   const SESSION_ID = 'test-session-id';
   const baseParams: ConfigParameters = {
@@ -234,11 +234,11 @@ describe('Server Config (config.ts)', () => {
     expect(config.getTelemetryEnabled()).toBe(false);
   });

-  it('Config constructor should default telemetry to default value if not provided', () => {
+  it('Config constructor should default telemetry to false if not provided', () => {
     const paramsWithoutTelemetry: ConfigParameters = { ...baseParams };
     delete paramsWithoutTelemetry.telemetry;
     const config = new Config(paramsWithoutTelemetry);
-    expect(config.getTelemetryEnabled()).toBe(TELEMETRY_SETTINGS.enabled);
+    expect(config.getTelemetryEnabled()).toBe(false);
   });

   it('should have a getFileService method that returns FileDiscoveryService', () => {
@@ -285,20 +285,20 @@ describe('Server Config (config.ts)', () => {
     expect(config.getTelemetryLogPromptsEnabled()).toBe(false);
   });

-  it('should return default logPrompts setting (true) if not provided', () => {
+  it('should return default logPrompts setting (false) if not provided', () => {
     const params: ConfigParameters = {
       ...baseParams,
       telemetry: { enabled: true },
     };
     const config = new Config(params);
-    expect(config.getTelemetryLogPromptsEnabled()).toBe(true);
+    expect(config.getTelemetryLogPromptsEnabled()).toBe(false);
   });

-  it('should return default logPrompts setting (true) if telemetry object is not provided', () => {
+  it('should return default logPrompts setting (false) if telemetry object is not provided', () => {
     const paramsWithoutTelemetry: ConfigParameters = { ...baseParams };
     delete paramsWithoutTelemetry.telemetry;
     const config = new Config(paramsWithoutTelemetry);
-    expect(config.getTelemetryLogPromptsEnabled()).toBe(true);
+    expect(config.getTelemetryLogPromptsEnabled()).toBe(false);
   });

   it('should return default telemetry target if telemetry object is not provided', () => {
```
@@ -37,13 +37,11 @@ import {
   DEFAULT_TELEMETRY_TARGET,
   DEFAULT_OTLP_ENDPOINT,
   TelemetryTarget,
-  StartSessionEvent,
 } from '../telemetry/index.js';
 import {
   DEFAULT_GEMINI_EMBEDDING_MODEL,
   DEFAULT_GEMINI_FLASH_MODEL,
 } from './models.js';
-import { ClearcutLogger } from '../telemetry/clearcut-logger/clearcut-logger.js';

 export enum ApprovalMode {
   DEFAULT = 'default',

@@ -249,7 +247,7 @@ export class Config {
       enabled: params.telemetry?.enabled ?? false,
       target: params.telemetry?.target ?? DEFAULT_TELEMETRY_TARGET,
       otlpEndpoint: params.telemetry?.otlpEndpoint ?? DEFAULT_OTLP_ENDPOINT,
-      logPrompts: params.telemetry?.logPrompts ?? true,
+      logPrompts: params.telemetry?.logPrompts ?? false,
     };
     this.usageStatisticsEnabled = params.usageStatisticsEnabled ?? true;

@@ -285,9 +283,10 @@ export class Config {
     }

     if (this.getUsageStatisticsEnabled()) {
-      ClearcutLogger.getInstance(this)?.logStartSessionEvent(
-        new StartSessionEvent(this),
-      );
+      // ClearcutLogger.getInstance(this)?.logStartSessionEvent(
+      //   new StartSessionEvent(this),
+      // );
+      console.log('ClearcutLogger disabled - no data collection.');
     } else {
       console.log('Data collection is disabled.');
     }

@@ -475,7 +474,7 @@ export class Config {
   }

   getTelemetryLogPromptsEnabled(): boolean {
-    return this.telemetrySettings.logPrompts ?? true;
+    return this.telemetrySettings.logPrompts ?? false;
   }

   getTelemetryOtlpEndpoint(): string {

@@ -530,7 +529,7 @@ export class Config {
   }

   getUsageStatisticsEnabled(): boolean {
-    return false; // Disable telemetry statistics to prevent network requests
+    return this.usageStatisticsEnabled;
   }

   getExtensionContextFilePaths(): string[] {
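The telemetry settings above are merged with nullish coalescing, so an explicit `false` (or `true`) from the caller survives while only `undefined`/`null` falls back to the new defaults. A minimal sketch of that merging pattern; the `TelemetrySettings` shape here is illustrative, not the project's exact type:

```typescript
// Illustrative sketch of the default-merging pattern from the Config
// constructor above; field names mirror the diff, the interface is assumed.
interface TelemetrySettings {
  enabled?: boolean;
  logPrompts?: boolean;
}

function resolveTelemetrySettings(params?: TelemetrySettings) {
  return {
    // `??` only falls back on null/undefined, so an explicit caller value
    // is kept as-is.
    enabled: params?.enabled ?? false,
    logPrompts: params?.logPrompts ?? false,
  };
}
```

This is why the tests above can pass `telemetry: { enabled: true }` and still expect `logPrompts` to be `false`.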
@@ -5,5 +5,5 @@
  */

 export const DEFAULT_GEMINI_MODEL = 'qwen3-coder-plus';
-export const DEFAULT_GEMINI_FLASH_MODEL = 'gemini-2.5-flash';
+export const DEFAULT_GEMINI_FLASH_MODEL = 'qwen3-coder-flash';
 export const DEFAULT_GEMINI_EMBEDDING_MODEL = 'gemini-embedding-001';
@@ -34,7 +34,7 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
     vi.clearAllMocks();

     // Mock environment variables
-    vi.stubEnv('OPENAI_BASE_URL', '');
+    vi.stubEnv('QWEN_CODE_BASE_URL', '');

     // Mock config
     mockConfig = {
@@ -75,7 +75,7 @@ export async function createContentGeneratorConfig(
   const googleApiKey = process.env.GOOGLE_API_KEY || undefined;
   const googleCloudProject = process.env.GOOGLE_CLOUD_PROJECT || undefined;
   const googleCloudLocation = process.env.GOOGLE_CLOUD_LOCATION || undefined;
-  const openaiApiKey = process.env.OPENAI_API_KEY;
+  const openaiApiKey = process.env.QWEN_CODE_API_KEY;

   // Use runtime model from config if available, otherwise fallback to parameter or default
   const effectiveModel = model || DEFAULT_GEMINI_MODEL;

@@ -117,7 +117,7 @@ export async function createContentGeneratorConfig(
   if (authType === AuthType.USE_OPENAI && openaiApiKey) {
     contentGeneratorConfig.apiKey = openaiApiKey;
     contentGeneratorConfig.model =
-      process.env.OPENAI_MODEL || DEFAULT_GEMINI_MODEL;
+      process.env.QWEN_CODE_MODEL || DEFAULT_GEMINI_MODEL;

     return contentGeneratorConfig;
   }
@@ -97,7 +97,7 @@ export class OpenAIContentGenerator implements ContentGenerator {
   constructor(apiKey: string, model: string, config: Config) {
     this.model = model;
     this.config = config;
-    const baseURL = process.env.OPENAI_BASE_URL || '';
+    const baseURL = process.env.QWEN_CODE_BASE_URL || '';

     // Configure timeout settings - using progressive timeouts
     const timeoutConfig = {

@@ -118,11 +118,21 @@ export class OpenAIContentGenerator implements ContentGenerator {
       timeoutConfig.maxRetries = contentGeneratorConfig.maxRetries;
     }

+    // Check if using OpenRouter and add required headers
+    const isOpenRouter = baseURL.includes('openrouter.ai');
+    const defaultHeaders = isOpenRouter
+      ? {
+          'HTTP-Referer': 'https://github.com/QwenLM/qwen-code.git',
+          'X-Title': 'Qwen Code',
+        }
+      : undefined;
+
     this.client = new OpenAI({
       apiKey,
       baseURL,
       timeout: timeoutConfig.timeout,
       maxRetries: timeoutConfig.maxRetries,
+      defaultHeaders,
     });
   }
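The OpenRouter branch above only attaches the extra attribution headers when the base URL points at `openrouter.ai`; otherwise `defaultHeaders` stays `undefined` and the OpenAI client keeps its defaults. Sketched in isolation (assuming the same substring check as the diff):

```typescript
// Sketch of the header-selection logic from the constructor above.
function selectDefaultHeaders(
  baseURL: string,
): Record<string, string> | undefined {
  const isOpenRouter = baseURL.includes('openrouter.ai');
  return isOpenRouter
    ? {
        'HTTP-Referer': 'https://github.com/QwenLM/qwen-code.git',
        'X-Title': 'Qwen Code',
      }
    : undefined;
}
```

Passing `undefined` rather than an empty object means non-OpenRouter requests carry no extra headers at all.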
@@ -130,8 +130,8 @@ describe('URL matching with trailing slash compatibility', () => {
     // Test case 1: No trailing slash in config, actual URL has trailing slash
     process.env = {
       ...originalEnv,
-      OPENAI_BASE_URL: 'https://api.example.com/',
-      OPENAI_MODEL: 'gpt-4',
+      QWEN_CODE_BASE_URL: 'https://api.example.com/',
+      QWEN_CODE_MODEL: 'gpt-4',
     };

     const result1 = getCoreSystemPrompt(undefined, config);

@@ -140,8 +140,8 @@ describe('URL matching with trailing slash compatibility', () => {
     // Test case 2: Config has trailing slash, actual URL has no trailing slash
     process.env = {
       ...originalEnv,
-      OPENAI_BASE_URL: 'https://api.openai.com',
-      OPENAI_MODEL: 'gpt-3.5-turbo',
+      QWEN_CODE_BASE_URL: 'https://api.openai.com',
+      QWEN_CODE_MODEL: 'gpt-3.5-turbo',
     };

     const result2 = getCoreSystemPrompt(undefined, config);

@@ -150,8 +150,8 @@ describe('URL matching with trailing slash compatibility', () => {
     // Test case 3: No trailing slash in config, actual URL has no trailing slash
     process.env = {
       ...originalEnv,
-      OPENAI_BASE_URL: 'https://api.example.com',
-      OPENAI_MODEL: 'gpt-4',
+      QWEN_CODE_BASE_URL: 'https://api.example.com',
+      QWEN_CODE_MODEL: 'gpt-4',
     };

     const result3 = getCoreSystemPrompt(undefined, config);

@@ -160,8 +160,8 @@ describe('URL matching with trailing slash compatibility', () => {
     // Test case 4: Config has trailing slash, actual URL has trailing slash
     process.env = {
       ...originalEnv,
-      OPENAI_BASE_URL: 'https://api.openai.com/',
-      OPENAI_MODEL: 'gpt-3.5-turbo',
+      QWEN_CODE_BASE_URL: 'https://api.openai.com/',
+      QWEN_CODE_MODEL: 'gpt-3.5-turbo',
     };

     const result4 = getCoreSystemPrompt(undefined, config);

@@ -187,8 +187,8 @@ describe('URL matching with trailing slash compatibility', () => {
     // Test case: URLs do not match
     process.env = {
       ...originalEnv,
-      OPENAI_BASE_URL: 'https://api.different.com',
-      OPENAI_MODEL: 'gpt-4',
+      QWEN_CODE_BASE_URL: 'https://api.different.com',
+      QWEN_CODE_MODEL: 'gpt-4',
     };

     const result = getCoreSystemPrompt(undefined, config);
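The four cases above all assert that a configured base URL and the environment's base URL are treated as equal regardless of a single trailing slash. One way that comparison could be implemented, assuming simple string normalization rather than full URL parsing (the function names here are hypothetical, not the project's):

```typescript
// Hypothetical normalization behind the trailing-slash tests above.
function stripTrailingSlash(url: string): string {
  return url.endsWith('/') ? url.slice(0, -1) : url;
}

function baseUrlsMatch(a: string, b: string): boolean {
  return stripTrailingSlash(a) === stripTrailingSlash(b);
}
```

Normalizing both sides before comparing covers all four slash combinations the tests exercise, while genuinely different hosts still fail to match.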
@@ -65,8 +65,8 @@ export function getCoreSystemPrompt(

   // Check for system prompt mappings from global config
   if (config?.systemPromptMappings) {
-    const currentModel = process.env.OPENAI_MODEL || DEFAULT_GEMINI_MODEL;
-    const currentBaseUrl = process.env.OPENAI_BASE_URL || '';
+    const currentModel = process.env.QWEN_CODE_MODEL || DEFAULT_GEMINI_MODEL;
+    const currentBaseUrl = process.env.QWEN_CODE_BASE_URL || '';

     const matchedMapping = config.systemPromptMappings.find((mapping) => {
       const { baseUrls, modelNames } = mapping;
@@ -54,9 +54,13 @@ export class ClearcutLogger {
     this.config = config;
   }

-  static getInstance(_config?: Config): ClearcutLogger | undefined {
-    // Disable the Clearcut logger to avoid network requests
-    return undefined;
+  static getInstance(config?: Config): ClearcutLogger | undefined {
+    if (config === undefined || !config?.getUsageStatisticsEnabled())
+      return undefined;
+    if (!ClearcutLogger.instance) {
+      ClearcutLogger.instance = new ClearcutLogger(config);
+    }
+    return ClearcutLogger.instance;
   }

   // eslint-disable-next-line @typescript-eslint/no-explicit-any -- Clearcut expects this format.
@@ -4,20 +4,20 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-export const SERVICE_NAME = 'gemini-cli';
+export const SERVICE_NAME = 'qwen-code';

-export const EVENT_USER_PROMPT = 'gemini_cli.user_prompt';
-export const EVENT_TOOL_CALL = 'gemini_cli.tool_call';
-export const EVENT_API_REQUEST = 'gemini_cli.api_request';
-export const EVENT_API_ERROR = 'gemini_cli.api_error';
-export const EVENT_API_RESPONSE = 'gemini_cli.api_response';
-export const EVENT_CLI_CONFIG = 'gemini_cli.config';
-export const EVENT_FLASH_FALLBACK = 'gemini_cli.flash_fallback';
+export const EVENT_USER_PROMPT = 'qwen-code.user_prompt';
+export const EVENT_TOOL_CALL = 'qwen-code.tool_call';
+export const EVENT_API_REQUEST = 'qwen-code.api_request';
+export const EVENT_API_ERROR = 'qwen-code.api_error';
+export const EVENT_API_RESPONSE = 'qwen-code.api_response';
+export const EVENT_CLI_CONFIG = 'qwen-code.config';
+export const EVENT_FLASH_FALLBACK = 'qwen-code.flash_fallback';

-export const METRIC_TOOL_CALL_COUNT = 'gemini_cli.tool.call.count';
-export const METRIC_TOOL_CALL_LATENCY = 'gemini_cli.tool.call.latency';
-export const METRIC_API_REQUEST_COUNT = 'gemini_cli.api.request.count';
-export const METRIC_API_REQUEST_LATENCY = 'gemini_cli.api.request.latency';
-export const METRIC_TOKEN_USAGE = 'gemini_cli.token.usage';
-export const METRIC_SESSION_COUNT = 'gemini_cli.session.count';
-export const METRIC_FILE_OPERATION_COUNT = 'gemini_cli.file.operation.count';
+export const METRIC_TOOL_CALL_COUNT = 'qwen-code.tool.call.count';
+export const METRIC_TOOL_CALL_LATENCY = 'qwen-code.tool.call.latency';
+export const METRIC_API_REQUEST_COUNT = 'qwen-code.api.request.count';
+export const METRIC_API_REQUEST_LATENCY = 'qwen-code.api.request.latency';
+export const METRIC_TOKEN_USAGE = 'qwen-code.token.usage';
+export const METRIC_SESSION_COUNT = 'qwen-code.session.count';
+export const METRIC_FILE_OPERATION_COUNT = 'qwen-code.file.operation.count';
@@ -6,11 +6,11 @@

 export enum TelemetryTarget {
   GCP = 'gcp',
   LOCAL = 'local',
+  QW = 'qw',
 }

-const DEFAULT_TELEMETRY_TARGET = TelemetryTarget.LOCAL;
-const DEFAULT_OTLP_ENDPOINT = 'http://localhost:4317';
+const DEFAULT_TELEMETRY_TARGET = TelemetryTarget.QW;
+const DEFAULT_OTLP_ENDPOINT = 'http://tracing-analysis-dc-hz.aliyuncs.com:8090';

-export { DEFAULT_TELEMETRY_TARGET, DEFAULT_OTLP_ENDPOINT };
+export {
@@ -57,7 +57,6 @@ describe('loggers', () => {
   };

   beforeEach(() => {
-    vi.clearAllMocks(); // Clear mock calls left over from previous tests
     vi.spyOn(sdk, 'isTelemetrySdkInitialized').mockReturnValue(true);
     vi.spyOn(logs, 'getLogger').mockReturnValue(mockLogger);
     vi.spyOn(uiTelemetry.uiTelemetryService, 'addEvent').mockImplementation(

@@ -147,7 +146,7 @@ describe('loggers', () => {
       'event.name': EVENT_USER_PROMPT,
       'event.timestamp': '2025-01-01T00:00:00.000Z',
       prompt_length: 11,
-      // The prompt field was removed because shouldLogUserPrompts now returns false
+      prompt: 'test-prompt',
     },
   });
 });
@@ -5,6 +5,7 @@
  */

 import { logs, LogRecord, LogAttributes } from '@opentelemetry/api-logs';
+import { trace, context } from '@opentelemetry/api';
 import { SemanticAttributes } from '@opentelemetry/semantic-conventions';
 import { Config } from '../config/config.js';
 import {

@@ -35,10 +36,11 @@ import {
 } from './metrics.js';
 import { isTelemetrySdkInitialized } from './sdk.js';
 import { uiTelemetryService, UiEvent } from './uiTelemetry.js';
-import { ClearcutLogger } from './clearcut-logger/clearcut-logger.js';
+// import { ClearcutLogger } from './clearcut-logger/clearcut-logger.js';
 import { safeJsonStringify } from '../utils/safeJsonStringify.js';

-const shouldLogUserPrompts = (_config: Config): boolean => false; // Disable user prompt logging
+const shouldLogUserPrompts = (config: Config): boolean =>
+  config.getTelemetryLogPromptsEnabled();

 function getCommonAttributes(config: Config): LogAttributes {
   return {

@@ -46,11 +48,32 @@ function getCommonAttributes(config: Config): LogAttributes {
   };
 }

+// Helper function to create spans and emit logs within span context
+function logWithSpan(
+  spanName: string,
+  logBody: string,
+  attributes: LogAttributes,
+): void {
+  const tracer = trace.getTracer(SERVICE_NAME);
+  const span = tracer.startSpan(spanName);
+
+  context.with(trace.setSpan(context.active(), span), () => {
+    const logger = logs.getLogger(SERVICE_NAME);
+    const logRecord: LogRecord = {
+      body: logBody,
+      attributes,
+    };
+    logger.emit(logRecord);
+  });
+
+  span.end();
+}
+
 export function logCliConfiguration(
   config: Config,
   event: StartSessionEvent,
 ): void {
-  ClearcutLogger.getInstance(config)?.logStartSessionEvent(event);
+  // ClearcutLogger.getInstance(config)?.logStartSessionEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -70,16 +93,11 @@ export function logCliConfiguration(
     mcp_servers: event.mcp_servers,
   };

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: 'CLI configuration loaded.',
-    attributes,
-  };
-  logger.emit(logRecord);
+  logWithSpan('cli.configuration', 'CLI configuration loaded.', attributes);
 }

 export function logUserPrompt(config: Config, event: UserPromptEvent): void {
-  ClearcutLogger.getInstance(config)?.logNewPromptEvent(event);
+  // ClearcutLogger.getInstance(config)?.logNewPromptEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -93,12 +111,11 @@ export function logUserPrompt(config: Config, event: UserPromptEvent): void {
     attributes.prompt = event.prompt;
   }

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `User prompt. Length: ${event.prompt_length}.`,
+  logWithSpan(
+    'user.prompt',
+    `User prompt. Length: ${event.prompt_length}.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
 }

 export function logToolCall(config: Config, event: ToolCallEvent): void {

@@ -108,7 +125,7 @@ export function logToolCall(config: Config, event: ToolCallEvent): void {
     'event.timestamp': new Date().toISOString(),
   } as UiEvent;
   uiTelemetryService.addEvent(uiEvent);
-  ClearcutLogger.getInstance(config)?.logToolCallEvent(event);
+  // ClearcutLogger.getInstance(config)?.logToolCallEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -125,12 +142,11 @@ export function logToolCall(config: Config, event: ToolCallEvent): void {
     }
   }

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `Tool call: ${event.function_name}${event.decision ? `. Decision: ${event.decision}` : ''}. Success: ${event.success}. Duration: ${event.duration_ms}ms.`,
+  logWithSpan(
+    `tool.${event.function_name}`,
+    `Tool call: ${event.function_name}${event.decision ? `. Decision: ${event.decision}` : ''}. Success: ${event.success}. Duration: ${event.duration_ms}ms.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
   recordToolCallMetrics(
     config,
     event.function_name,

@@ -141,7 +157,7 @@ export function logToolCall(config: Config, event: ToolCallEvent): void {
 }

 export function logApiRequest(config: Config, event: ApiRequestEvent): void {
-  ClearcutLogger.getInstance(config)?.logApiRequestEvent(event);
+  // ClearcutLogger.getInstance(config)?.logApiRequestEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -151,19 +167,18 @@ export function logApiRequest(config: Config, event: ApiRequestEvent): void {
     'event.timestamp': new Date().toISOString(),
   };

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `API request to ${event.model}.`,
+  logWithSpan(
+    `api.request.${event.model}`,
+    `API request to ${event.model}.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
 }

 export function logFlashFallback(
   config: Config,
   event: FlashFallbackEvent,
 ): void {
-  ClearcutLogger.getInstance(config)?.logFlashFallbackEvent(event);
+  // ClearcutLogger.getInstance(config)?.logFlashFallbackEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -173,12 +188,11 @@ export function logFlashFallback(
     'event.timestamp': new Date().toISOString(),
   };

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `Switching to flash as Fallback.`,
+  logWithSpan(
+    'api.flash_fallback',
+    'Switching to flash as Fallback.',
     attributes,
-  };
-  logger.emit(logRecord);
+  );
 }

 export function logApiError(config: Config, event: ApiErrorEvent): void {

@@ -188,7 +202,7 @@ export function logApiError(config: Config, event: ApiErrorEvent): void {
     'event.timestamp': new Date().toISOString(),
   } as UiEvent;
   uiTelemetryService.addEvent(uiEvent);
-  ClearcutLogger.getInstance(config)?.logApiErrorEvent(event);
+  // ClearcutLogger.getInstance(config)?.logApiErrorEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -208,12 +222,11 @@ export function logApiError(config: Config, event: ApiErrorEvent): void {
     attributes[SemanticAttributes.HTTP_STATUS_CODE] = event.status_code;
   }

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `API error for ${event.model}. Error: ${event.error}. Duration: ${event.duration_ms}ms.`,
+  logWithSpan(
+    `api.error.${event.model}`,
+    `API error for ${event.model}. Error: ${event.error}. Duration: ${event.duration_ms}ms.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
   recordApiErrorMetrics(
     config,
     event.model,

@@ -230,7 +243,7 @@ export function logApiResponse(config: Config, event: ApiResponseEvent): void {
     'event.timestamp': new Date().toISOString(),
   } as UiEvent;
   uiTelemetryService.addEvent(uiEvent);
-  ClearcutLogger.getInstance(config)?.logApiResponseEvent(event);
+  // ClearcutLogger.getInstance(config)?.logApiResponseEvent(event);
   if (!isTelemetrySdkInitialized()) return;
   const attributes: LogAttributes = {
     ...getCommonAttributes(config),

@@ -249,12 +262,11 @@ export function logApiResponse(config: Config, event: ApiResponseEvent): void {
     }
   }

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `API response from ${event.model}. Status: ${event.status_code || 'N/A'}. Duration: ${event.duration_ms}ms.`,
+  logWithSpan(
+    `api.response.${event.model}`,
+    `API response from ${event.model}. Status: ${event.status_code || 'N/A'}. Duration: ${event.duration_ms}ms.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
   recordApiResponseMetrics(
     config,
     event.model,

@@ -293,7 +305,7 @@ export function logLoopDetected(
   config: Config,
   event: LoopDetectedEvent,
 ): void {
-  ClearcutLogger.getInstance(config)?.logLoopDetectedEvent(event);
+  // ClearcutLogger.getInstance(config)?.logLoopDetectedEvent(event);
   if (!isTelemetrySdkInitialized()) return;

   const attributes: LogAttributes = {

@@ -301,10 +313,9 @@ export function logLoopDetected(
     ...event,
   };

-  const logger = logs.getLogger(SERVICE_NAME);
-  const logRecord: LogRecord = {
-    body: `Loop detected. Type: ${event.loop_type}.`,
+  logWithSpan(
+    'loop.detected',
+    `Loop detected. Type: ${event.loop_type}.`,
     attributes,
-  };
-  logger.emit(logRecord);
+  );
 }
@@ -6,29 +6,23 @@

 import { DiagConsoleLogger, DiagLogLevel, diag } from '@opentelemetry/api';
 import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
 import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-grpc';
 import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-grpc';
 import { CompressionAlgorithm } from '@opentelemetry/otlp-exporter-base';
 import { Metadata } from '@grpc/grpc-js';
 import { NodeSDK } from '@opentelemetry/sdk-node';
 import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
 import { Resource } from '@opentelemetry/resources';
-import {
-  BatchSpanProcessor,
-  ConsoleSpanExporter,
-} from '@opentelemetry/sdk-trace-node';
-import {
-  BatchLogRecordProcessor,
-  ConsoleLogRecordExporter,
-} from '@opentelemetry/sdk-logs';
-import {
-  ConsoleMetricExporter,
-  PeriodicExportingMetricReader,
-} from '@opentelemetry/sdk-metrics';
+import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-node';
+import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
+import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
+import type { ReadableSpan } from '@opentelemetry/sdk-trace-base';
+import type { LogRecord } from '@opentelemetry/sdk-logs';
+import type { ResourceMetrics } from '@opentelemetry/sdk-metrics';
+import type { ExportResult } from '@opentelemetry/core';
 import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
 import { Config } from '../config/config.js';
 import { SERVICE_NAME } from './constants.js';
 import { initializeMetrics } from './metrics.js';
-import { ClearcutLogger } from './clearcut-logger/clearcut-logger.js';

 // For troubleshooting, set the log level to DiagLogLevel.DEBUG
 diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);

@@ -75,28 +69,60 @@ export function initializeTelemetry(config: Config): void {
   const grpcParsedEndpoint = parseGrpcEndpoint(otlpEndpoint);
   const useOtlp = !!grpcParsedEndpoint;

+  const metadata = new Metadata();
+  metadata.set(
+    'Authentication',
+    'gb7x9m2kzp@8f4e3b6c9d2a1e5_qw7x9m2kzp@19a8c5f2b4e7d93',
+  );
+
   const spanExporter = useOtlp
     ? new OTLPTraceExporter({
         url: grpcParsedEndpoint,
         compression: CompressionAlgorithm.GZIP,
+        metadata,
       })
-    : new ConsoleSpanExporter();
-  const logExporter = useOtlp
-    ? new OTLPLogExporter({
-        url: grpcParsedEndpoint,
-        compression: CompressionAlgorithm.GZIP,
-      })
-    : new ConsoleLogRecordExporter();
+    : {
+        export: (
+          spans: ReadableSpan[],
+          callback: (result: ExportResult) => void,
+        ) => callback({ code: 0 }),
+        forceFlush: () => Promise.resolve(),
+        shutdown: () => Promise.resolve(),
+      };
+
+  // FIXME: Temporarily disable OTLP log export due to gRPC endpoint not supporting LogsService
+  // const logExporter = useOtlp
+  //   ? new OTLPLogExporter({
+  //       url: grpcParsedEndpoint,
+  //       compression: CompressionAlgorithm.GZIP,
+  //       metadata: _metadata,
+  //     })
+  //   : new ConsoleLogRecordExporter();
+
+  // Create a no-op log exporter to avoid cluttering console output
+  const logExporter = {
+    export: (logs: LogRecord[], callback: (result: ExportResult) => void) =>
+      callback({ code: 0 }),
+    shutdown: () => Promise.resolve(),
+  };
   const metricReader = useOtlp
     ? new PeriodicExportingMetricReader({
         exporter: new OTLPMetricExporter({
           url: grpcParsedEndpoint,
           compression: CompressionAlgorithm.GZIP,
+          metadata,
         }),
         exportIntervalMillis: 10000,
       })
     : new PeriodicExportingMetricReader({
-        exporter: new ConsoleMetricExporter(),
+        exporter: {
+          export: (
+            metrics: ResourceMetrics,
+            callback: (result: ExportResult) => void,
+          ) => callback({ code: 0 }),
+          forceFlush: () => Promise.resolve(),
+          shutdown: () => Promise.resolve(),
+        },
        exportIntervalMillis: 10000,
      });

@@ -126,7 +152,7 @@ export async function shutdownTelemetry(): Promise<void> {
     return;
   }
   try {
-    ClearcutLogger.getInstance()?.shutdown();
+    // ClearcutLogger.getInstance()?.shutdown();
     await sdk.shutdown();
     console.log('OpenTelemetry SDK shut down successfully.');
   } catch (error) {
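The no-op span, log, and metric exporters above all share one shape: `export` immediately reports success (code `0` corresponds to `ExportResultCode.SUCCESS` in `@opentelemetry/core`) and the lifecycle methods resolve without doing anything. A generic version of that stub, with a simplified local `ExportResult` type standing in for the OpenTelemetry interface so the sketch is self-contained:

```typescript
// Simplified stand-in for the ExportResult type from @opentelemetry/core.
type ExportResult = { code: number };

// Generic no-op exporter: satisfies the exporter contract while sending
// nothing over the network, mirroring the inline objects in the diff above.
function makeNoopExporter<T>() {
  return {
    // Report success immediately without exporting anything.
    export: (_items: T[], callback: (result: ExportResult) => void) =>
      callback({ code: 0 }),
    forceFlush: () => Promise.resolve(),
    shutdown: () => Promise.resolve(),
  };
}
```

Compared with the `Console*Exporter` classes it replaces, this keeps the SDK wiring intact when no OTLP endpoint is configured but avoids cluttering stdout with exported records.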
packages/vscode-ide-companion/package-lock.json (generated, 4 changed lines)
@@ -1,12 +1,12 @@
 {
   "name": "qwen-code-vscode",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "qwen-code-vscode",
-      "version": "0.0.1-alpha.8",
+      "version": "0.0.1-alpha.12",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.15.1",
         "cors": "^2.8.5",
@@ -2,7 +2,7 @@
   "name": "@qwen-code/qwen-code-vscode-ide-companion",
   "displayName": "Qwen Code VSCode IDE Companion",
   "description": "",
-  "version": "0.0.1-alpha.8",
+  "version": "0.0.1-alpha.12",
   "engines": {
     "vscode": "^1.101.0"
   },
@@ -77,23 +77,23 @@ if (!argv.s) {
   execSync('npm run build --workspaces', { stdio: 'inherit' });
 }

-console.log('packing @google/gemini-cli ...');
+console.log('packing @qwen-code/qwen-code ...');
 const cliPackageDir = join('packages', 'cli');
-rmSync(join(cliPackageDir, 'dist', 'google-gemini-cli-*.tgz'), { force: true });
+rmSync(join(cliPackageDir, 'dist', 'qwen-code-*.tgz'), { force: true });
 execSync(
-  `npm pack -w @google/gemini-cli --pack-destination ./packages/cli/dist`,
+  `npm pack -w @qwen-code/qwen-code --pack-destination ./packages/cli/dist`,
   {
     stdio: 'ignore',
   },
 );

-console.log('packing @google/gemini-cli-core ...');
+console.log('packing @qwen-code/qwen-code-core ...');
 const corePackageDir = join('packages', 'core');
-rmSync(join(corePackageDir, 'dist', 'google-gemini-cli-core-*.tgz'), {
+rmSync(join(corePackageDir, 'dist', 'qwen-code-core-*.tgz'), {
   force: true,
 });
 execSync(
-  `npm pack -w @google/gemini-cli-core --pack-destination ./packages/core/dist`,
+  `npm pack -w @qwen-code/qwen-code-core --pack-destination ./packages/core/dist`,
   { stdio: 'ignore' },
 );

@@ -102,11 +102,15 @@ const packageVersion = JSON.parse(
 ).version;

 chmodSync(
-  join(cliPackageDir, 'dist', `google-gemini-cli-${packageVersion}.tgz`),
+  join(cliPackageDir, 'dist', `qwen-code-qwen-code-${packageVersion}.tgz`),
   0o755,
 );
 chmodSync(
-  join(corePackageDir, 'dist', `google-gemini-cli-core-${packageVersion}.tgz`),
+  join(
+    corePackageDir,
+    'dist',
+    `qwen-code-qwen-code-core-${packageVersion}.tgz`,
+  ),
   0o755,
 );

@@ -134,14 +138,21 @@ function buildImage(imageName, dockerfile) {
     { stdio: buildStdout, shell: '/bin/bash' },
   );
   console.log(`built ${finalImageName}`);
-  if (existsSync('/workspace/final_image_uri.txt')) {
-    // The publish step only supports one image. If we build multiple, only the last one
-    // will be published. Throw an error to make this failure explicit.
-    throw new Error(
-      'CI artifact file /workspace/final_image_uri.txt already exists. Refusing to overwrite.',
-    );
-  }
-  writeFileSync('/workspace/final_image_uri.txt', finalImageName);
+
+  // If an output file path was provided via command-line, write the final image URI to it.
+  if (argv.outputFile) {
+    console.log(
+      `Writing final image URI for CI artifact to: ${argv.outputFile}`,
+    );
+    // The publish step only supports one image. If we build multiple, only the last one
+    // will be published. Throw an error to make this failure explicit if the file already exists.
+    if (existsSync(argv.outputFile)) {
+      throw new Error(
+        `CI artifact file ${argv.outputFile} already exists. Refusing to overwrite.`,
+      );
+    }
+    writeFileSync(argv.outputFile, finalImageName);
+  }
 }

 if (baseImage && baseDockerfile) {