mirror of
https://github.com/QwenLM/qwen-code.git
synced 2025-12-29 04:59:13 +00:00
Compare commits — 11 commits (`update-sys`…`fix/e2e-te`)
| Author | SHA1 | Date |
|---|---|---|
| | ffa436e4e5 | |
| | 955ad7e4f7 | |
| | 718f68d247 | |
| | b8e2852f96 | |
| | df5c4e8079 | |
| | a08bcb2f41 | |
| | 8e3b413fdd | |
| | bd0d3479c1 | |
| | dc087deace | |
| | d7890d6463 | |
| | 778837507e | |
`.github/workflows/e2e.yml` (vendored) — 4 changes

```diff
@@ -44,5 +44,7 @@ jobs:
       - name: Run E2E tests
         env:
           GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
+          OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
         run: npm run test:integration:${{ matrix.sandbox }} -- --verbose --keep-output
```
`Dockerfile` — 14 changes

```diff
@@ -1,6 +1,6 @@
 FROM docker.io/library/node:20-slim

-ARG SANDBOX_NAME="gemini-cli-sandbox"
+ARG SANDBOX_NAME="qwen-code-sandbox"
 ARG CLI_VERSION_ARG
 ENV SANDBOX="$SANDBOX_NAME"
 ENV CLI_VERSION=$CLI_VERSION_ARG
@@ -39,12 +39,12 @@ ENV PATH=$PATH:/usr/local/share/npm-global/bin
 # switch to non-root user node
 USER node

-# install gemini-cli and clean up
-COPY packages/cli/dist/google-gemini-cli-*.tgz /usr/local/share/npm-global/gemini-cli.tgz
-COPY packages/core/dist/google-gemini-cli-core-*.tgz /usr/local/share/npm-global/gemini-core.tgz
-RUN npm install -g /usr/local/share/npm-global/gemini-cli.tgz /usr/local/share/npm-global/gemini-core.tgz \
+# install qwen-code and clean up
+COPY packages/cli/dist/qwen-code-*.tgz /usr/local/share/npm-global/qwen-code.tgz
+COPY packages/core/dist/qwen-code-qwen-code-core-*.tgz /usr/local/share/npm-global/qwen-code-core.tgz
+RUN npm install -g /usr/local/share/npm-global/qwen-code.tgz /usr/local/share/npm-global/qwen-code-core.tgz \
   && npm cache clean --force \
-  && rm -f /usr/local/share/npm-global/gemini-{cli,core}.tgz
+  && rm -f /usr/local/share/npm-global/qwen-{code,code-core}.tgz

 # default entrypoint when none specified
-CMD ["gemini"]
+CMD ["qwen"]
```
`README.md` — 47 changes

````diff
@@ -5,7 +5,7 @@
 Qwen Code is a command-line AI workflow tool adapted from [**Gemini CLI**](https://github.com/google-gemini/gemini-cli) (Please refer to [this document](./README.gemini.md) for more details), optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder) models with enhanced parser support & tool support.

 > [!WARNING]
-> Qwen Code may issue multiple API calls per cycle, resulting in higher token usage, similar to Claude Code. We’re actively working to enhance API efficiency and improve the overall developer experience.
+> Qwen Code may issue multiple API calls per cycle, resulting in higher token usage, similar to Claude Code. We’re actively working to enhance API efficiency and improve the overall developer experience. ModelScope offers 2,000 free API calls if you are in mainland China. Please check the [API configuration section](#api-configuration) for more details.

 ## Key Features

@@ -26,7 +26,7 @@ curl -qL https://www.npmjs.com/install.sh | sh
 ### Installation

 ```bash
-npm install -g @qwen-code/qwen-code
+npm install -g @qwen-code/qwen-code@latest
 qwen --version
 ```

@@ -45,22 +45,47 @@ npm install
 npm install -g .
 ```

+We now support a max session token limit; you can set it in your `.qwen/settings.json` file to save on token usage.
+For example, if you want to set the max session token limit to 32000, you can set it like this:
+
+```json
+{
+  "maxSessionToken": 32000
+}
+```
+
+The max session token limit is the maximum number of tokens that can be used in one chat (not the total usage across multiple tool-call rounds); if you reach the limit, you can use the `/compress` command to compress the history and continue, or the `/clear` command to clear the history.
+
 ### API Configuration

 Set your Qwen API key (in a Qwen Code project, you can also set your API key in a `.env` file; the `.env` file should be placed in the root directory of your current project).

 > ⚠️ **Notice:** <br>
-> **If you are in mainland China, please go to https://bailian.console.aliyun.com/ to apply for your API key** <br>
+> **If you are in mainland China, please go to https://bailian.console.aliyun.com/ or https://modelscope.cn/docs/model-service/API-Inference/intro to apply for your API key** <br>
 > **If you are not in mainland China, please go to https://modelstudio.console.alibabacloud.com/ to apply for your API key**

+If you are in mainland China, you can use Qwen3-Coder through the Alibaba Cloud Bailian platform:
+
 ```bash
-# If you are in mainland China, use the following URL:
-# https://dashscope.aliyuncs.com/compatible-mode/v1
-# If you are not in mainland China, use the following URL:
-# https://dashscope-intl.aliyuncs.com/compatible-mode/v1
 export OPENAI_API_KEY="your_api_key_here"
-export OPENAI_BASE_URL="your_api_base_url_here"
-export OPENAI_MODEL="your_api_model_here"
+export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
+export OPENAI_MODEL="qwen3-coder-plus"
 ```

+If you are in mainland China, ModelScope offers 2,000 free model inference API calls per day:
+
+```bash
+export OPENAI_API_KEY="your_api_key_here"
+export OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
+export OPENAI_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
+```
+
+If you are not in mainland China, you can use Qwen3-Coder through the Alibaba Cloud ModelStudio platform:
+
+```bash
+export OPENAI_API_KEY="your_api_key_here"
+export OPENAI_BASE_URL="https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
+export OPENAI_MODEL="qwen3-coder-plus"
+```
+
 ## Usage Examples
@@ -148,3 +173,7 @@ This project is based on [Google Gemini CLI](https://github.com/google-gemini/ge
 ## License

 [LICENSE](./LICENSE)
+
+## Star History
+
+[Star History](https://www.star-history.com/#QwenLM/qwen-code&Date)
````
```diff
@@ -1,13 +1,13 @@
 # CLI Commands

-Gemini CLI supports several built-in commands to help you manage your session, customize the interface, and control its behavior. These commands are prefixed with a forward slash (`/`), an at symbol (`@`), or an exclamation mark (`!`).
+Qwen Code supports several built-in commands to help you manage your session, customize the interface, and control its behavior. These commands are prefixed with a forward slash (`/`), an at symbol (`@`), or an exclamation mark (`!`).

 ## Slash commands (`/`)

 Slash commands provide meta-level control over the CLI itself.

 - **`/bug`**
-  - **Description:** File an issue about Gemini CLI. By default, the issue is filed within the GitHub repository for Gemini CLI. The string you enter after `/bug` will become the headline for the bug being filed. The default `/bug` behavior can be modified using the `bugCommand` setting in your `.qwen/settings.json` files.
+  - **Description:** File an issue about Qwen Code. By default, the issue is filed within the GitHub repository for Qwen Code. The string you enter after `/bug` will become the headline for the bug being filed. The default `/bug` behavior can be modified using the `bugCommand` setting in your `.qwen/settings.json` files.

 - **`/chat`**
   - **Description:** Save and resume conversation history for branching conversation state interactively, or resuming a previous state from a later session.
@@ -32,10 +32,10 @@ Slash commands provide meta-level control over the CLI itself.
   - **Description:** Open a dialog for selecting supported editors.

 - **`/extensions`**
-  - **Description:** Lists all active extensions in the current Gemini CLI session. See [Gemini CLI Extensions](../extension.md).
+  - **Description:** Lists all active extensions in the current Qwen Code session. See [Qwen Code Extensions](../extension.md).

 - **`/help`** (or **`/?`**)
-  - **Description:** Display help information about the Gemini CLI, including available commands and their usage.
+  - **Description:** Display help information about Qwen Code, including available commands and their usage.

 - **`/mcp`**
   - **Description:** List configured Model Context Protocol (MCP) servers, their connection status, server details, and available tools.
@@ -65,10 +65,10 @@ Slash commands provide meta-level control over the CLI itself.
   - **Note:** Only available if the CLI is invoked with the `--checkpointing` option or configured via [settings](./configuration.md). See [Checkpointing documentation](../checkpointing.md) for more details.

 - **`/stats`**
-  - **Description:** Display detailed statistics for the current Gemini CLI session, including token usage, cached token savings (when available), and session duration. Note: Cached token information is only displayed when cached tokens are being used, which occurs with API key authentication but not with OAuth authentication at this time.
+  - **Description:** Display detailed statistics for the current Qwen Code session, including token usage, cached token savings (when available), and session duration. Note: Cached token information is only displayed when cached tokens are being used, which occurs with API key authentication but not with OAuth authentication at this time.

 - [**`/theme`**](./themes.md)
-  - **Description:** Open a dialog that lets you change the visual theme of Gemini CLI.
+  - **Description:** Open a dialog that lets you change the visual theme of Qwen Code.

 - **`/auth`**
   - **Description:** Open a dialog that lets you change the authentication method.
@@ -77,7 +77,7 @@ Slash commands provide meta-level control over the CLI itself.
   - **Description:** Show version info. Please share this information when filing issues.

 - [**`/tools`**](../tools/index.md)
-  - **Description:** Display a list of tools that are currently available within Gemini CLI.
+  - **Description:** Display a list of tools that are currently available within Qwen Code.
   - **Sub-commands:**
     - **`desc`** or **`descriptions`**:
       - **Description:** Show detailed descriptions of each tool, including each tool's name with its full description as provided to the model.
@@ -88,7 +88,7 @@ Slash commands provide meta-level control over the CLI itself.
   - **Description:** Display the Privacy Notice and allow users to select whether they consent to the collection of their data for service improvement purposes.

 - **`/quit`** (or **`/exit`**)
-  - **Description:** Exit Gemini CLI.
+  - **Description:** Exit Qwen Code.

 ## At commands (`@`)

@@ -119,13 +119,13 @@ At commands are used to include the content of files or directories as part of y

 ## Shell mode & passthrough commands (`!`)

-The `!` prefix lets you interact with your system's shell directly from within Gemini CLI.
+The `!` prefix lets you interact with your system's shell directly from within Qwen Code.

 - **`!<shell_command>`**
   - **Description:** Execute the given `<shell_command>` in your system's default shell. Any output or errors from the command are displayed in the terminal.
   - **Examples:**
-    - `!ls -la` (executes `ls -la` and returns to Gemini CLI)
-    - `!git status` (executes `git status` and returns to Gemini CLI)
+    - `!ls -la` (executes `ls -la` and returns to Qwen Code)
+    - `!git status` (executes `git status` and returns to Qwen Code)

 - **`!` (Toggle shell mode)**
   - **Description:** Typing `!` on its own toggles shell mode.
@@ -133,6 +133,6 @@ The `!` prefix lets you interact with your system's shell directly from within G
   - When active, shell mode uses a different coloring and a "Shell Mode Indicator".
   - While in shell mode, text you type is interpreted directly as a shell command.
   - **Exiting shell mode:**
-    - When exited, the UI reverts to its standard appearance and normal Gemini CLI behavior resumes.
+    - When exited, the UI reverts to its standard appearance and normal Qwen Code behavior resumes.

 - **Caution for all `!` usage:** Commands you execute in shell mode have the same permissions and impact as if you ran them directly in your terminal.
```
`package-lock.json` (generated) — 7 changes

```diff
@@ -10582,6 +10582,12 @@
         "tslib": "^2"
       }
     },
+    "node_modules/tiktoken": {
+      "version": "1.0.21",
+      "resolved": "https://registry.npmjs.org/tiktoken/-/tiktoken-1.0.21.tgz",
+      "integrity": "sha512-/kqtlepLMptX0OgbYD9aMYbM7EFrMZCL7EoHM8Psmg2FuhXoo/bH64KqOiZGGwa6oS9TPdSEDKBnV2LuB8+5vQ==",
+      "license": "MIT"
+    },
     "node_modules/tinybench": {
       "version": "2.9.0",
       "resolved": "https://registry.npmjs.org/tinybench/-/tinybench-2.9.0.tgz",
@@ -12143,6 +12149,7 @@
         "shell-quote": "^1.8.3",
         "simple-git": "^3.28.0",
         "strip-ansi": "^7.1.0",
+        "tiktoken": "^1.0.21",
         "undici": "^7.10.0",
         "ws": "^8.18.0"
       },
```
```diff
@@ -13,7 +13,7 @@
     "url": "git+http://gitlab.alibaba-inc.com/Qwen-Coder/qwen-code.git"
   },
   "config": {
-    "sandboxImageUri": "us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.0.1-alpha.8"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.1-alpha.8"
   },
   "scripts": {
     "start": "node scripts/start.js",
@@ -25,7 +25,7 @@
     "dist"
   ],
   "config": {
-    "sandboxImageUri": "us-docker.pkg.dev/gemini-code-dev/gemini-cli/sandbox:0.0.1-alpha.8"
+    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.1-alpha.8"
   },
   "dependencies": {
     "@qwen-code/qwen-code-core": "file:../core",
```
```diff
@@ -382,6 +382,8 @@ export async function loadCliConfig(
     model: argv.model!,
     extensionContextFilePaths,
     maxSessionTurns: settings.maxSessionTurns ?? -1,
+    sessionTokenLimit: settings.sessionTokenLimit ?? 32000,
+    maxFolderItems: settings.maxFolderItems ?? 20,
     listExtensions: argv.listExtensions || false,
     activeExtensions: activeExtensions.map((e) => ({
       name: e.config.name,
```
```diff
@@ -85,6 +85,12 @@
   // Setting for setting maximum number of user/model/tool turns in a session.
   maxSessionTurns?: number;

+  // Setting for maximum token limit for conversation history before blocking requests
+  sessionTokenLimit?: number;
+
+  // Setting for maximum number of files and folders to show in folder structure
+  maxFolderItems?: number;
+
   // Sampling parameters for content generation
   sampling_params?: {
     top_p?: number;
```
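The two new optional fields default to 32000 and 20 when absent, mirroring the `?? 32000` / `?? 20` defaults in the `loadCliConfig` hunk. A minimal sketch of that defaulting pattern — the helper name here is hypothetical, not part of the repository:

```typescript
// Sketch: merging optional Settings fields with defaults via nullish coalescing.
interface PartialSettings {
  maxSessionTurns?: number;
  sessionTokenLimit?: number;
  maxFolderItems?: number;
}

function resolveSettings(settings: PartialSettings) {
  return {
    maxSessionTurns: settings.maxSessionTurns ?? -1,
    // `??` only replaces null/undefined, so an explicit 0 is preserved
    sessionTokenLimit: settings.sessionTokenLimit ?? 32000,
    maxFolderItems: settings.maxFolderItems ?? 20,
  };
}
```

Using `??` rather than `||` matters here: `||` would silently replace a deliberate `0` with the default.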
```diff
@@ -323,16 +323,34 @@ async function validateNonInterActiveAuth(
   nonInteractiveConfig: Config,
 ) {
   // making a special case for the cli. many headless environments might not have a settings.json set
-  // so if GEMINI_API_KEY is set, we'll use that. However since the oauth things are interactive anyway, we'll
+  // so if GEMINI_API_KEY or OPENAI_API_KEY is set, we'll use that. However since the oauth things are interactive anyway, we'll
   // still expect that exists
-  if (!selectedAuthType && !process.env.GEMINI_API_KEY) {
+  if (
+    !selectedAuthType &&
+    !process.env.GEMINI_API_KEY &&
+    !process.env.OPENAI_API_KEY
+  ) {
     console.error(
-      `Please set an Auth method in your ${USER_SETTINGS_PATH} OR specify GEMINI_API_KEY env variable file before running`,
+      `Please set an Auth method in your ${USER_SETTINGS_PATH} OR specify GEMINI_API_KEY or OPENAI_API_KEY env variable before running`,
     );
     process.exit(1);
   }

-  selectedAuthType = selectedAuthType || AuthType.USE_GEMINI;
+  // Determine auth type based on available environment variables
+  if (!selectedAuthType) {
+    if (process.env.OPENAI_API_KEY) {
+      selectedAuthType = AuthType.USE_OPENAI;
+    } else if (process.env.GEMINI_API_KEY) {
+      selectedAuthType = AuthType.USE_GEMINI;
+    }
+  }
+
+  // This should never happen due to the check above, but TypeScript needs assurance
+  if (!selectedAuthType) {
+    console.error('No valid authentication method found');
+    process.exit(1);
+  }
+
   const err = validateAuthMethod(selectedAuthType);
   if (err != null) {
     console.error(err);
```
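The fallback order introduced above — an explicit setting wins, then `OPENAI_API_KEY`, then `GEMINI_API_KEY` — can be isolated as a pure function. A sketch in which the string literals `"openai"` and `"gemini"` stand in for the real `AuthType` enum members:

```typescript
// Sketch of the env-based auth-type fallback; not the repository's actual code.
type Env = { GEMINI_API_KEY?: string; OPENAI_API_KEY?: string };

function resolveAuthType(
  selected: string | undefined,
  env: Env,
): string | undefined {
  if (selected) return selected;           // an explicit setting always wins
  if (env.OPENAI_API_KEY) return 'openai'; // OpenAI-compatible key takes priority
  if (env.GEMINI_API_KEY) return 'gemini';
  return undefined;                        // caller reports an error and exits
}
```

Keeping the decision in a pure function like this makes the priority order trivially unit-testable, unlike logic that reads `process.env` and calls `process.exit` inline.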
```diff
@@ -60,7 +60,9 @@ export const createMockCommandContext = (
           byName: {},
         },
       },
+      promptCount: 0,
     } as SessionStatsState,
+    resetSession: vi.fn(),
   },
 };
```
```diff
@@ -43,17 +43,22 @@ describe('clearCommand', () => {

     expect(mockResetChat).toHaveBeenCalledTimes(1);

+    expect(mockContext.session.resetSession).toHaveBeenCalledTimes(1);
+
     expect(mockContext.ui.clear).toHaveBeenCalledTimes(1);

     // Check the order of operations.
     const setDebugMessageOrder = (mockContext.ui.setDebugMessage as Mock).mock
       .invocationCallOrder[0];
     const resetChatOrder = mockResetChat.mock.invocationCallOrder[0];
+    const resetSessionOrder = (mockContext.session.resetSession as Mock).mock
+      .invocationCallOrder[0];
     const clearOrder = (mockContext.ui.clear as Mock).mock
       .invocationCallOrder[0];

     expect(setDebugMessageOrder).toBeLessThan(resetChatOrder);
-    expect(resetChatOrder).toBeLessThan(clearOrder);
+    expect(resetChatOrder).toBeLessThan(resetSessionOrder);
+    expect(resetSessionOrder).toBeLessThan(clearOrder);
   });

   it('should not attempt to reset chat if config service is not available', async () => {
@@ -73,6 +78,7 @@ describe('clearCommand', () => {
       'Clearing terminal and resetting chat.',
     );
     expect(mockResetChat).not.toHaveBeenCalled();
+    expect(nullConfigContext.session.resetSession).toHaveBeenCalledTimes(1);
     expect(nullConfigContext.ui.clear).toHaveBeenCalledTimes(1);
   });
 });
```
```diff
@@ -12,6 +12,7 @@ export const clearCommand: SlashCommand = {
   action: async (context, _args) => {
     context.ui.setDebugMessage('Clearing terminal and resetting chat.');
     await context.services.config?.getGeminiClient()?.resetChat();
+    context.session.resetSession();
     context.ui.clear();
   },
 };
```
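The ordering the updated test enforces — debug message, chat reset, session reset, then UI clear — can be demonstrated with a simple call recorder. The names below mirror the diff, but the objects are hypothetical stand-ins, and the sketch is synchronous where the real action awaits `resetChat()`:

```typescript
// Sketch: /clear action sequencing, with an array recording the call order.
const calls: string[] = [];

const ctx = {
  ui: {
    setDebugMessage: (_msg: string) => { calls.push('setDebugMessage'); },
    clear: () => { calls.push('clear'); },
  },
  services: { resetChat: () => { calls.push('resetChat'); } },
  session: { resetSession: () => { calls.push('resetSession'); } },
};

function clearAction(): void {
  ctx.ui.setDebugMessage('Clearing terminal and resetting chat.');
  ctx.services.resetChat();   // awaited and optionally chained in the real code
  ctx.session.resetSession(); // reset session stats before clearing the UI
  ctx.ui.clear();
}
```

Resetting the session stats before clearing the UI ensures the redrawn screen never shows stale counters.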
```diff
@@ -38,6 +38,7 @@ export interface CommandContext {
   // Session-specific data
   session: {
     stats: SessionStatsState;
+    resetSession: () => void;
   };
 }
```
```diff
@@ -36,7 +36,7 @@ export const AboutBox: React.FC<AboutBoxProps> = ({
     >
       <Box marginBottom={1}>
         <Text bold color={Colors.AccentPurple}>
-          About Gemini CLI
+          About Qwen Code
         </Text>
       </Box>
       <Box flexDirection="row">
```
```diff
@@ -63,7 +63,7 @@ describe('<HistoryItemDisplay />', () => {
     const { lastFrame } = render(
       <HistoryItemDisplay {...baseItem} item={item} />,
     );
-    expect(lastFrame()).toContain('About Gemini CLI');
+    expect(lastFrame()).toContain('About Qwen Code');
   });

   it('renders ModelStatsDisplay for "model_stats" type', () => {
```
```diff
@@ -50,6 +50,7 @@ interface SessionStatsContextValue {
   stats: SessionStatsState;
   startNewPrompt: () => void;
   getPromptCount: () => number;
+  resetSession: () => void;
 }

 // --- Context Definition ---
@@ -109,13 +110,23 @@ export const SessionStatsProvider: React.FC<{ children: React.ReactNode }> = ({
     [stats.promptCount],
   );

+  const resetSession = useCallback(() => {
+    setStats({
+      sessionStartTime: new Date(),
+      metrics: uiTelemetryService.getMetrics(),
+      lastPromptTokenCount: uiTelemetryService.getLastPromptTokenCount(),
+      promptCount: 0,
+    });
+  }, []);
+
   const value = useMemo(
     () => ({
       stats,
       startNewPrompt,
       getPromptCount,
+      resetSession,
     }),
-    [stats, startNewPrompt, getPromptCount],
+    [stats, startNewPrompt, getPromptCount, resetSession],
   );

   return (
```
```diff
@@ -554,7 +554,7 @@ describe('useSlashCommandProcessor', () => {
 * **Memory Usage:** ${memoryUsage}
 `;
       let url =
-        'https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml';
+        'https://github.com/QwenLM/Qwen-Code/issues/new?template=bug_report.yml';
       if (description) {
         url += `&title=${encodeURIComponent(description)}`;
       }
@@ -172,6 +172,7 @@ export const useSlashCommandProcessor = (
       },
       session: {
         stats: session.stats,
+        resetSession: session.resetSession,
       },
     }),
     [
@@ -183,6 +184,7 @@
       clearItems,
       refreshStatic,
       session.stats,
+      session.resetSession,
       onDebugMessage,
     ],
   );
@@ -538,7 +540,7 @@ export const useSlashCommandProcessor = (
         // Filter out MCP tools by checking if they have a serverName property
         const geminiTools = tools.filter((tool) => !('serverName' in tool));

-        let message = 'Available Gemini CLI tools:\n\n';
+        let message = 'Available Qwen Code tools:\n\n';

         if (geminiTools.length > 0) {
           geminiTools.forEach((tool) => {
@@ -618,7 +620,7 @@
 `;

         let bugReportUrl =
-          'https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml&title={title}&info={info}';
+          'https://github.com/QwenLM/Qwen-Code/issues/new?template=bug_report.yml&title={title}&info={info}';
         const bugCommand = config?.getBugCommand();
         if (bugCommand?.urlTemplate) {
           bugReportUrl = bugCommand.urlTemplate;
```
```diff
@@ -452,6 +452,23 @@ export const useGeminiStream = (
     [addItem, config],
   );

+  const handleSessionTokenLimitExceededEvent = useCallback(
+    (value: { currentTokens: number; limit: number; message: string }) =>
+      addItem(
+        {
+          type: 'error',
+          text:
+            `🚫 Session token limit exceeded: ${value.currentTokens.toLocaleString()} tokens > ${value.limit.toLocaleString()} limit.\n\n` +
+            `💡 Solutions:\n` +
+            `   • Start a new session: Use /clear command\n` +
+            `   • Increase limit: Add "sessionTokenLimit": (e.g., 128000) to your settings.json\n` +
+            `   • Compress history: Use /compress command to compress history`,
+        },
+        Date.now(),
+      ),
+    [addItem],
+  );
+
   const handleLoopDetectedEvent = useCallback(() => {
     addItem(
       {
@@ -501,6 +518,9 @@
           case ServerGeminiEventType.MaxSessionTurns:
             handleMaxSessionTurnsEvent();
             break;
+          case ServerGeminiEventType.SessionTokenLimitExceeded:
+            handleSessionTokenLimitExceededEvent(event.value);
+            break;
           case ServerGeminiEventType.LoopDetected:
            // handle later because we want to move pending history to history
            // before we add loop detected message to history
@@ -525,6 +545,7 @@
       scheduleToolCalls,
       handleChatCompressionEvent,
       handleMaxSessionTurnsEvent,
+      handleSessionTokenLimitExceededEvent,
     ],
   );
```
```diff
@@ -31,9 +31,9 @@ function getContainerPath(hostPath: string): string {
   return hostPath;
 }

-const LOCAL_DEV_SANDBOX_IMAGE_NAME = 'gemini-cli-sandbox';
-const SANDBOX_NETWORK_NAME = 'gemini-cli-sandbox';
-const SANDBOX_PROXY_NAME = 'gemini-cli-sandbox-proxy';
+const LOCAL_DEV_SANDBOX_IMAGE_NAME = 'qwen-code-sandbox';
+const SANDBOX_NETWORK_NAME = 'qwen-code-sandbox';
+const SANDBOX_PROXY_NAME = 'qwen-code-sandbox-proxy';
 const BUILTIN_SEATBELT_PROFILES = [
   'permissive-open',
   'permissive-closed',
@@ -172,8 +172,8 @@ function entrypoint(workdir: string): string[] {
       ? 'npm run debug --'
       : 'npm rebuild && npm run start --'
     : process.env.DEBUG
-      ? `node --inspect-brk=0.0.0.0:${process.env.DEBUG_PORT || '9229'} $(which gemini)`
-      : 'gemini';
+      ? `node --inspect-brk=0.0.0.0:${process.env.DEBUG_PORT || '9229'} $(which qwen)`
+      : 'qwen';

   const args = [...shellCmds, cliCmd, ...cliArgs];

@@ -517,6 +517,17 @@ export async function start_sandbox(
     args.push('--env', `GOOGLE_API_KEY=${process.env.GOOGLE_API_KEY}`);
   }

+  // copy OPENAI_API_KEY and related env vars for Qwen
+  if (process.env.OPENAI_API_KEY) {
+    args.push('--env', `OPENAI_API_KEY=${process.env.OPENAI_API_KEY}`);
+  }
+  if (process.env.OPENAI_BASE_URL) {
+    args.push('--env', `OPENAI_BASE_URL=${process.env.OPENAI_BASE_URL}`);
+  }
+  if (process.env.OPENAI_MODEL) {
+    args.push('--env', `OPENAI_MODEL=${process.env.OPENAI_MODEL}`);
+  }
+
   // copy GOOGLE_GENAI_USE_VERTEXAI
   if (process.env.GOOGLE_GENAI_USE_VERTEXAI) {
     args.push(
```
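The three repeated `if`/`push` blocks above follow a single pattern: forward an environment variable into the container only when it is set. A generic sketch of that pattern — the helper name is hypothetical, not part of the repository:

```typescript
// Sketch: forward only the environment variables that are actually set,
// as repeated `--env KEY=VALUE` container arguments.
function envForwardArgs(
  env: Record<string, string | undefined>,
  keys: string[],
): string[] {
  const args: string[] = [];
  for (const key of keys) {
    const value = env[key];
    if (value) {
      // unset or empty variables are skipped, matching the diff's `if` guards
      args.push('--env', `${key}=${value}`);
    }
  }
  return args;
}
```

Usage would look like `envForwardArgs(process.env, ['OPENAI_API_KEY', 'OPENAI_BASE_URL', 'OPENAI_MODEL'])`, which keeps unset secrets out of the container's environment entirely.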
```diff
@@ -44,6 +44,7 @@
     "shell-quote": "^1.8.3",
     "simple-git": "^3.28.0",
     "strip-ansi": "^7.1.0",
+    "tiktoken": "^1.0.21",
     "undici": "^7.10.0",
     "ws": "^8.18.0"
   },
```
```diff
@@ -56,7 +56,7 @@ export interface HttpOptions {
   headers?: Record<string, string>;
 }

-export const CODE_ASSIST_ENDPOINT = 'https://cloudcode-pa.googleapis.com';
+export const CODE_ASSIST_ENDPOINT = 'https://localhost:0'; // Disable Google Code Assist API Request
 export const CODE_ASSIST_API_VERSION = 'v1internal';

 export class CodeAssistServer implements ContentGenerator {
```
```diff
@@ -140,6 +140,8 @@ export interface ConfigParameters {
   model: string;
   extensionContextFilePaths?: string[];
   maxSessionTurns?: number;
+  sessionTokenLimit?: number;
+  maxFolderItems?: number;
   listExtensions?: boolean;
   activeExtensions?: ActiveExtension[];
   noBrowser?: boolean;
@@ -216,6 +218,8 @@ export class Config {
   }>;
   private modelSwitchedDuringSession: boolean = false;
   private readonly maxSessionTurns: number;
+  private readonly sessionTokenLimit: number;
+  private readonly maxFolderItems: number;
   private readonly listExtensions: boolean;
   private readonly _activeExtensions: ActiveExtension[];
   flashFallbackHandler?: FlashFallbackHandler;
@@ -262,6 +266,8 @@ export class Config {
     this.model = params.model;
     this.extensionContextFilePaths = params.extensionContextFilePaths ?? [];
     this.maxSessionTurns = params.maxSessionTurns ?? -1;
+    this.sessionTokenLimit = params.sessionTokenLimit ?? 32000;
+    this.maxFolderItems = params.maxFolderItems ?? 20;
     this.listExtensions = params.listExtensions ?? false;
     this._activeExtensions = params.activeExtensions ?? [];
     this.noBrowser = params.noBrowser ?? false;
@@ -353,6 +359,14 @@
     return this.maxSessionTurns;
   }

+  getSessionTokenLimit(): number {
+    return this.sessionTokenLimit;
+  }
+
+  getMaxFolderItems(): number {
+    return this.maxFolderItems;
+  }
+
   setQuotaErrorOccurred(value: boolean): void {
     this.quotaErrorOccurred = value;
   }
@@ -516,7 +530,7 @@
   }

   getUsageStatisticsEnabled(): boolean {
-    return this.usageStatisticsEnabled;
+    return false; // Disable telemetry statistics to prevent network requests
   }

   getExtensionContextFilePaths(): string[] {
```
```diff
@@ -4,6 +4,6 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-export const DEFAULT_GEMINI_MODEL = 'qwen3-coder-max';
+export const DEFAULT_GEMINI_MODEL = 'qwen3-coder-plus';
 export const DEFAULT_GEMINI_FLASH_MODEL = 'gemini-2.5-flash';
 export const DEFAULT_GEMINI_EMBEDDING_MODEL = 'gemini-embedding-001';
```
File diff suppressed because it is too large
```diff
@@ -195,6 +195,8 @@ describe('Gemini Client (client.ts)', () => {
       getWorkingDir: vi.fn().mockReturnValue('/test/dir'),
       getFileService: vi.fn().mockReturnValue(fileService),
       getMaxSessionTurns: vi.fn().mockReturnValue(0),
+      getSessionTokenLimit: vi.fn().mockReturnValue(32000),
+      getMaxFolderItems: vi.fn().mockReturnValue(20),
       getQuotaErrorOccurred: vi.fn().mockReturnValue(false),
       setQuotaErrorOccurred: vi.fn(),
       getNoBrowser: vi.fn().mockReturnValue(false),
```
@@ -167,6 +167,7 @@ export class GeminiClient {
|
||||
const platform = process.platform;
|
||||
const folderStructure = await getFolderStructure(cwd, {
|
||||
fileService: this.config.getFileService(),
|
||||
maxItems: this.config.getMaxFolderItems(),
|
||||
});
|
||||
const context = `
This is the Qwen Code. We are setting up the context for our chat.

@@ -306,6 +307,49 @@ export class GeminiClient {
    if (compressed) {
      yield { type: GeminiEventType.ChatCompressed, value: compressed };
    }

    // Check session token limit after compression using accurate token counting
    const sessionTokenLimit = this.config.getSessionTokenLimit();
    if (sessionTokenLimit > 0) {
      // Get all the content that would be sent in an API call
      const currentHistory = this.getChat().getHistory(true);
      const userMemory = this.config.getUserMemory();
      const systemPrompt = getCoreSystemPrompt(userMemory);
      const environment = await this.getEnvironment();

      // Create a mock request content to count total tokens
      const mockRequestContent = [
        {
          role: 'system' as const,
          parts: [{ text: systemPrompt }, ...environment],
        },
        ...currentHistory,
      ];

      // Use the improved countTokens method for accurate counting
      const { totalTokens: totalRequestTokens } =
        await this.getContentGenerator().countTokens({
          model: this.config.getModel(),
          contents: mockRequestContent,
        });

      if (
        totalRequestTokens !== undefined &&
        totalRequestTokens > sessionTokenLimit
      ) {
        yield {
          type: GeminiEventType.SessionTokenLimitExceeded,
          value: {
            currentTokens: totalRequestTokens,
            limit: sessionTokenLimit,
            message:
              `Session token limit exceeded: ${totalRequestTokens} tokens > ${sessionTokenLimit} limit. ` +
              'Please start a new session or increase the sessionTokenLimit in your settings.json.',
          },
        };
        return new Turn(this.getChat(), prompt_id);
      }
    }
    const turn = new Turn(this.getChat(), prompt_id);
    const resultStream = turn.run(request, signal);
    for await (const event of resultStream) {
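The hunk above only emits `SessionTokenLimitExceeded` when a positive limit is configured and the counted request tokens actually exceed it. A standalone sketch of that decision (the helper name is illustrative, not part of the codebase):

```typescript
// Mirrors the guard in the diff: a limit of 0 or less disables the check,
// and an unknown token count never trips it.
function exceedsSessionLimit(
  totalRequestTokens: number | undefined,
  sessionTokenLimit: number,
): boolean {
  if (sessionTokenLimit <= 0) return false; // check disabled
  return (
    totalRequestTokens !== undefined && totalRequestTokens > sessionTokenLimit
  );
}
```

Note that the check runs after compression, so a session that compresses back under the limit continues normally.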
@@ -116,7 +116,8 @@ export async function createContentGeneratorConfig(

  if (authType === AuthType.USE_OPENAI && openaiApiKey) {
    contentGeneratorConfig.apiKey = openaiApiKey;
    contentGeneratorConfig.model = process.env.OPENAI_MODEL || '';
    contentGeneratorConfig.model =
      process.env.OPENAI_MODEL || DEFAULT_GEMINI_MODEL;

    return contentGeneratorConfig;
  }
@@ -4,10 +4,7 @@
 * SPDX-License-Identifier: Apache-2.0
 */

import {
  DEFAULT_GEMINI_MODEL,
  DEFAULT_GEMINI_FLASH_MODEL,
} from '../config/models.js';
// Removed unused imports

/**
 * Checks if the default "pro" model is rate-limited and returns a fallback "flash"

@@ -18,51 +15,9 @@ import {
 * and the original model if a switch happened.
 */
export async function getEffectiveModel(
  apiKey: string,
  _apiKey: string,
  currentConfiguredModel: string,
): Promise<string> {
  if (currentConfiguredModel !== DEFAULT_GEMINI_MODEL) {
    // Only check if the user is trying to use the specific pro model we want to fallback from.
    return currentConfiguredModel;
  }

  const modelToTest = DEFAULT_GEMINI_MODEL;
  const fallbackModel = DEFAULT_GEMINI_FLASH_MODEL;
  const endpoint = `https://generativelanguage.googleapis.com/v1beta/models/${modelToTest}:generateContent?key=${apiKey}`;
  const body = JSON.stringify({
    contents: [{ parts: [{ text: 'test' }] }],
    generationConfig: {
      maxOutputTokens: 1,
      temperature: 0,
      topK: 1,
      thinkingConfig: { thinkingBudget: 128, includeThoughts: false },
    },
  });

  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 2000); // 2s timeout for the request

  try {
    const response = await fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body,
      signal: controller.signal,
    });

    clearTimeout(timeoutId);

    if (response.status === 429) {
      console.log(
        `[INFO] Your configured model (${modelToTest}) was temporarily unavailable. Switched to ${fallbackModel} for this session.`,
      );
      return fallbackModel;
    }
    // For any other case (success, other error codes), we stick to the original model.
    return currentConfiguredModel;
  } catch (_error) {
    clearTimeout(timeoutId);
    // On timeout or any other fetch error, stick to the original model.
    return currentConfiguredModel;
  }
  // Disable Google API Model Check
  return currentConfiguredModel;
}
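The removed `getEffectiveModel` body makes a one-token probe request and switches models only on a rate-limit response. The decision itself reduces to a small pure function (the helper name and `'fetch-error'` sentinel are illustrative, not part of the codebase):

```typescript
// Only an HTTP 429 from the probe triggers the flash fallback; success,
// other status codes, timeouts, and fetch errors all keep the configured model.
function pickEffectiveModel(
  probeResult: number | 'fetch-error',
  configuredModel: string,
  fallbackModel: string,
): string {
  return probeResult === 429 ? fallbackModel : configuredModel;
}
```

The diff then disables the probe entirely by returning `currentConfiguredModel` unconditionally, so no network request is made.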
@@ -52,6 +52,9 @@ interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: {
    cached_tokens?: number;
  };
}

interface OpenAIChoice {

@@ -515,6 +518,8 @@ export class OpenAIContentGenerator implements ContentGenerator {
      return new GenerateContentResponse();
    }

    const lastResponse = responses[responses.length - 1];

    // Find the last response with usage metadata
    const finalUsageMetadata = responses
      .slice()

@@ -561,6 +566,8 @@ export class OpenAIContentGenerator implements ContentGenerator {
        safetyRatings: [],
      },
    ];
    combinedResponse.responseId = lastResponse?.responseId;
    combinedResponse.createTime = lastResponse?.createTime;
    combinedResponse.modelVersion = this.model;
    combinedResponse.promptFeedback = { safetyRatings: [] };
    combinedResponse.usageMetadata = finalUsageMetadata;

@@ -571,14 +578,26 @@ export class OpenAIContentGenerator implements ContentGenerator {
  async countTokens(
    request: CountTokensParameters,
  ): Promise<CountTokensResponse> {
    // OpenAI doesn't have a direct token counting endpoint
    // We'll estimate based on the tiktoken library or a rough calculation
    // For now, return a rough estimate
    // Use tiktoken for accurate token counting
    const content = JSON.stringify(request.contents);
    const estimatedTokens = Math.ceil(content.length / 4); // Rough estimate: 1 token ≈ 4 characters
    let totalTokens = 0;

    try {
      const { get_encoding } = await import('tiktoken');
      const encoding = get_encoding('cl100k_base'); // GPT-4 encoding, used as an estimate for qwen
      totalTokens = encoding.encode(content).length;
      encoding.free();
    } catch (error) {
      console.warn(
        'Failed to load tiktoken, falling back to character approximation:',
        error,
      );
      // Fallback: rough approximation using character count
      totalTokens = Math.ceil(content.length / 4); // Rough estimate: 1 token ≈ 4 characters
    }

    return {
      totalTokens: estimatedTokens,
      totalTokens,
    };
  }
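The new `countTokens` prefers tiktoken but keeps the old character-count approximation as its fallback path. A self-contained sketch of just that fallback (the function name is illustrative; it is not the codebase's API):

```typescript
// Serialize the request contents and assume roughly 4 characters per token,
// exactly as the catch branch in the diff does.
function estimateTokensByChars(contents: unknown): number {
  const serialized = JSON.stringify(contents);
  return Math.ceil(serialized.length / 4); // rough estimate: 1 token ≈ 4 characters
}

// e.g. [{"text":"hello"}] serializes to 18 characters, so ~5 tokens.
const approx = estimateTokensByChars([{ text: 'hello' }]);
```

The approximation overcounts because JSON punctuation and keys are included, but for a limit check an overestimate errs on the safe side.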
@@ -1128,6 +1147,9 @@ export class OpenAIContentGenerator implements ContentGenerator {
      }
    }

    response.responseId = openaiResponse.id;
    response.createTime = openaiResponse.created.toString();

    response.candidates = [
      {
        content: {

@@ -1145,15 +1167,12 @@ export class OpenAIContentGenerator implements ContentGenerator {

    // Add usage metadata if available
    if (openaiResponse.usage) {
      const usage = openaiResponse.usage as {
        prompt_tokens?: number;
        completion_tokens?: number;
        total_tokens?: number;
      };
      const usage = openaiResponse.usage as OpenAIUsage;

      const promptTokens = usage.prompt_tokens || 0;
      const completionTokens = usage.completion_tokens || 0;
      const totalTokens = usage.total_tokens || 0;
      const cachedTokens = usage.prompt_tokens_details?.cached_tokens || 0;

      // If we only have total tokens but no breakdown, estimate the split
      // Typically input is ~70% and output is ~30% for most conversations

@@ -1170,6 +1189,7 @@ export class OpenAIContentGenerator implements ContentGenerator {
        promptTokenCount: finalPromptTokens,
        candidatesTokenCount: finalCompletionTokens,
        totalTokenCount: totalTokens,
        cachedContentTokenCount: cachedTokens,
      };
    }
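The comment above describes estimating a ~70% prompt / ~30% completion split when only `total_tokens` is reported; the guard itself falls outside the hunk. A sketch consistent with that comment (the helper name and the exact zero-breakdown condition are assumptions, not shown in the diff):

```typescript
// When the provider reports only a total, split it ~70/30; otherwise keep
// the reported breakdown unchanged.
function normalizeUsage(
  promptTokens: number,
  completionTokens: number,
  totalTokens: number,
): { promptTokenCount: number; candidatesTokenCount: number } {
  if (totalTokens > 0 && promptTokens === 0 && completionTokens === 0) {
    const estimatedPrompt = Math.round(totalTokens * 0.7);
    return {
      promptTokenCount: estimatedPrompt,
      candidatesTokenCount: totalTokens - estimatedPrompt,
    };
  }
  return { promptTokenCount: promptTokens, candidatesTokenCount: completionTokens };
}
```

Computing the completion side as `totalTokens - estimatedPrompt` keeps the two parts summing exactly to the reported total.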
@@ -1263,20 +1283,20 @@ export class OpenAIContentGenerator implements ContentGenerator {
      response.candidates = [];
    }

    response.responseId = chunk.id;
    response.createTime = chunk.created.toString();

    response.modelVersion = this.model;
    response.promptFeedback = { safetyRatings: [] };

    // Add usage metadata if available in the chunk
    if (chunk.usage) {
      const usage = chunk.usage as {
        prompt_tokens?: number;
        completion_tokens?: number;
        total_tokens?: number;
      };
      const usage = chunk.usage as OpenAIUsage;

      const promptTokens = usage.prompt_tokens || 0;
      const completionTokens = usage.completion_tokens || 0;
      const totalTokens = usage.total_tokens || 0;
      const cachedTokens = usage.prompt_tokens_details?.cached_tokens || 0;

      // If we only have total tokens but no breakdown, estimate the split
      // Typically input is ~70% and output is ~30% for most conversations

@@ -1293,6 +1313,7 @@ export class OpenAIContentGenerator implements ContentGenerator {
        promptTokenCount: finalPromptTokens,
        candidatesTokenCount: finalCompletionTokens,
        totalTokenCount: totalTokens,
        cachedContentTokenCount: cachedTokens,
      };
    }
@@ -1727,9 +1748,11 @@ export class OpenAIContentGenerator implements ContentGenerator {
    }

    const openaiResponse: OpenAIResponseFormat = {
      id: `chatcmpl-${Date.now()}`,
      id: response.responseId || `chatcmpl-${Date.now()}`,
      object: 'chat.completion',
      created: Math.floor(Date.now() / 1000),
      created: response.createTime
        ? Number(response.createTime)
        : Math.floor(Date.now() / 1000),
      model: this.model,
      choices: [choice],
    };

@@ -1741,6 +1764,12 @@ export class OpenAIContentGenerator implements ContentGenerator {
        completion_tokens: response.usageMetadata.candidatesTokenCount || 0,
        total_tokens: response.usageMetadata.totalTokenCount || 0,
      };

      if (response.usageMetadata.cachedContentTokenCount) {
        openaiResponse.usage.prompt_tokens_details = {
          cached_tokens: response.usageMetadata.cachedContentTokenCount,
        };
      }
    }

    return openaiResponse;
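The hunk above stops fabricating `id` and `created` when converting back to OpenAI format: it reuses the original response's values and only falls back to fresh ones when they are absent. The two conversions in isolation (helper names are illustrative):

```typescript
// `createTime` is stored as a string of Unix seconds (chunk.created.toString()
// in the earlier hunks), so the round trip is just Number().
function toOpenAICreated(createTime?: string): number {
  return createTime ? Number(createTime) : Math.floor(Date.now() / 1000);
}

// Reuse the upstream response id when present; otherwise synthesize one.
function toOpenAIId(responseId?: string): string {
  return responseId || `chatcmpl-${Date.now()}`;
}
```

Preserving the original id and timestamp keeps logged requests and responses correlatable across the conversion.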
@@ -32,7 +32,7 @@ describe('Core System Prompt (prompts.ts)', () => {
    vi.stubEnv('SANDBOX', undefined);
    const prompt = getCoreSystemPrompt();
    expect(prompt).not.toContain('---\n\n'); // Separator should not be present
    expect(prompt).toContain('You are an interactive CLI agent'); // Check for core content
    expect(prompt).toContain('You are Qwen Code, an interactive CLI agent'); // Check for core content
    expect(prompt).toMatchSnapshot(); // Use snapshot for base prompt structure
  });

@@ -40,7 +40,7 @@ describe('Core System Prompt (prompts.ts)', () => {
    vi.stubEnv('SANDBOX', undefined);
    const prompt = getCoreSystemPrompt('');
    expect(prompt).not.toContain('---\n\n');
    expect(prompt).toContain('You are an interactive CLI agent');
    expect(prompt).toContain('You are Qwen Code, an interactive CLI agent');
    expect(prompt).toMatchSnapshot();
  });

@@ -48,7 +48,7 @@ describe('Core System Prompt (prompts.ts)', () => {
    vi.stubEnv('SANDBOX', undefined);
    const prompt = getCoreSystemPrompt(' \n \t ');
    expect(prompt).not.toContain('---\n\n');
    expect(prompt).toContain('You are an interactive CLI agent');
    expect(prompt).toContain('You are Qwen Code, an interactive CLI agent');
    expect(prompt).toMatchSnapshot();
  });

@@ -59,7 +59,7 @@ describe('Core System Prompt (prompts.ts)', () => {
    const prompt = getCoreSystemPrompt(memory);

    expect(prompt.endsWith(expectedSuffix)).toBe(true);
    expect(prompt).toContain('You are an interactive CLI agent'); // Ensure base prompt follows
    expect(prompt).toContain('You are Qwen Code, an interactive CLI agent'); // Ensure base prompt follows
    expect(prompt).toMatchSnapshot(); // Snapshot the combined prompt
  });
@@ -6,7 +6,6 @@

import path from 'node:path';
import fs from 'node:fs';
import { LSTool } from '../tools/ls.js';
import { EditTool } from '../tools/edit.js';
import { GlobTool } from '../tools/glob.js';
import { GrepTool } from '../tools/grep.js';

@@ -17,6 +16,7 @@ import { WriteFileTool } from '../tools/write-file.js';
import process from 'node:process';
import { isGitRepository } from '../utils/gitUtils.js';
import { MemoryTool, GEMINI_CONFIG_DIR } from '../tools/memoryTool.js';
import { DEFAULT_GEMINI_MODEL } from '../config/models.js';

export interface ModelTemplateMapping {
  baseUrls?: string[];

@@ -65,7 +65,7 @@ export function getCoreSystemPrompt(

  // Check for system prompt mappings from global config
  if (config?.systemPromptMappings) {
    const currentModel = process.env.OPENAI_MODEL || '';
    const currentModel = process.env.OPENAI_MODEL || DEFAULT_GEMINI_MODEL;
    const currentBaseUrl = process.env.OPENAI_BASE_URL || '';

    const matchedMapping = config.systemPromptMappings.find((mapping) => {

@@ -111,7 +111,7 @@ export function getCoreSystemPrompt(
  const basePrompt = systemMdEnabled
    ? fs.readFileSync(systemMdPath, 'utf8')
    : `
You are an interactive CLI agent specializing in software engineering tasks. Your primary goal is to help users safely and efficiently, adhering strictly to the following instructions and utilizing your available tools.
You are Qwen Code, an interactive CLI agent developed by Alibaba Group, specializing in software engineering tasks. Your primary goal is to help users safely and efficiently, adhering strictly to the following instructions and utilizing your available tools.

# Core Mandates
@@ -241,24 +241,56 @@ model: true

<example>
user: list files here.
model: [tool_call: ${LSTool.Name} for path '/path/to/project']
model:
<tool_call>
<function=list_directory>
<parameter=path>
.
</parameter>
</function>
</tool_call>
</example>

<example>
user: start the server implemented in server.js
model: [tool_call: ${ShellTool.Name} for 'node server.js &' because it must run in the background]
model:
<tool_call>
<function=run_shell_command>
<parameter=command>
node server.js &
</parameter>
</function>
</tool_call>
</example>

<example>
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
model: Okay, I can refactor 'src/auth.py'.
First, I'll analyze the code and check for a test safety net before planning any changes.
[tool_call: ${GlobTool.Name} for path 'tests/test_auth.py']
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/tests/test_auth.py']
<tool_call>
<function=glob>
<parameter=path>
tests/test_auth.py
</parameter>
</function>
</tool_call>
<tool_call>
<function=read_file>
<parameter=path>
/path/to/tests/test_auth.py
</parameter>
</function>
</tool_call>
(After analysis)
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
I'll also confirm 'requests' is a dependency.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/requirements.txt']
<tool_call>
<function=read_file>
<parameter=path>
/path/to/requirements.txt
</parameter>
</function>
</tool_call>
(After analysis)
Looks good, 'requests' is available.

@@ -268,20 +300,30 @@ Here's the plan:
3. Remove the old 'urllib' import.
4. Run the project's linter and tests to verify the changes.

Should I proceed?
user: Yes
model:
[tool_call: ${WriteFileTool.Name} or ${EditTool.Name} to apply the refactoring to 'src/auth.py']
<tool_call>
<function=replace>
<parameter=path>
src/auth.py
</parameter>
<parameter=old_content>
(old code content)
</parameter>
<parameter=new_content>
(new code content)
</parameter>
</function>
</tool_call>
Refactoring complete. Running verification...
[tool_call: ${ShellTool.Name} for 'ruff check src/auth.py && pytest']
<tool_call>
<function=run_shell_command>
<parameter=command>
ruff check src/auth.py && pytest
</parameter>
</function>
</tool_call>
(After verification passes)
All checks passed. This is a stable checkpoint.
${(function () {
  if (isGitRepository(process.cwd())) {
    return `Would you like me to write a commit message and commit these changes?`;
  }
  return '';
})()}
Would you like me to write a commit message and commit these changes?
</example>

<example>

@@ -292,32 +334,48 @@ model: I can run \`rm -rf /path/to/project/temp\`. This will permanently delete

<example>
user: Write tests for someFile.ts
model:
Okay, I can write those tests. First, I'll read \`someFile.ts\` to understand its functionality.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/someFile.ts' or use ${GlobTool.Name} to find \`someFile.ts\` if its location is unknown]
Okay, I can write those tests. First, I'll read someFile.ts to understand its functionality.
<tool_call>
<function=read_file>
<parameter=path>
/path/to/someFile.ts
</parameter>
</function>
</tool_call>
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
[tool_call: ${ReadManyFilesTool.Name} for paths ['**/*.test.ts', 'src/**/*.spec.ts'] assuming someFile.ts is in the src directory]
<tool_call>
<function>read_many_files for paths ['**/*.test.ts', 'src/**/*.spec.ts'] assuming someFile.ts is in the src directory]
</tool_call>
(After reviewing existing tests and the file content)
[tool_call: ${WriteFileTool.Name} to create /path/to/someFile.test.ts with the test code]
<tool_call>
<function=write_file>
<parameter=path>
/path/to/someFile.test.ts
</parameter>
</function>
</tool_call>
I've written the tests. Now I'll run the project's test command to verify them.
[tool_call: ${ShellTool.Name} for 'npm run test']
</example>

<example>
user: How do I update the user's profile information in this system?
model:
I'm not immediately sure how user profile information is updated. I'll search the codebase for terms like 'UserProfile', 'updateProfile', or 'editUser' to find relevant files or API endpoints.
[tool_call: ${GrepTool.Name} for pattern 'UserProfile|updateProfile|editUser']
(After reviewing search results, assuming a relevant file like '/path/to/UserProfileService.java' was found)
Okay, \`/path/to/UserProfileService.java\` seems like the most relevant file. I'll read its content to understand how updates are handled.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/UserProfileService.java']
(After reading the file)
It appears the \`updateUserProfile\` method in \`UserProfileService.java\` is responsible for this. It expects a user ID and a \`UserProfileDTO\` object...
<tool_call>
<function=run_shell_command>
<parameter=command>
npm run test
</parameter>
</function>
</tool_call>
(After verification passes)
All checks passed. This is a stable checkpoint.
</example>

<example>
user: Where are all the 'app.config' files in this project? I need to check their settings.
model:
[tool_call: ${GlobTool.Name} for pattern '**/app.config']
<tool_call>
<function=glob>
<parameter=pattern>
./**/app.config
</parameter>
</function>
</tool_call>
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
I found the following 'app.config' files:
- /path/to/moduleA/app.config
@@ -49,6 +49,7 @@ export enum GeminiEventType {
  ChatCompressed = 'chat_compressed',
  Thought = 'thought',
  MaxSessionTurns = 'max_session_turns',
  SessionTokenLimitExceeded = 'session_token_limit_exceeded',
  LoopDetected = 'loop_detected',
}

@@ -61,6 +62,12 @@ export interface GeminiErrorEventValue {
  error: StructuredError;
}

export interface SessionTokenLimitExceededValue {
  currentTokens: number;
  limit: number;
  message: string;
}

export interface ToolCallRequestInfo {
  callId: string;
  name: string;

@@ -134,6 +141,11 @@ export type ServerGeminiMaxSessionTurnsEvent = {
  type: GeminiEventType.MaxSessionTurns;
};

export type ServerGeminiSessionTokenLimitExceededEvent = {
  type: GeminiEventType.SessionTokenLimitExceeded;
  value: SessionTokenLimitExceededValue;
};

export type ServerGeminiLoopDetectedEvent = {
  type: GeminiEventType.LoopDetected;
};

@@ -149,6 +161,7 @@ export type ServerGeminiStreamEvent =
  | ServerGeminiChatCompressedEvent
  | ServerGeminiThoughtEvent
  | ServerGeminiMaxSessionTurnsEvent
  | ServerGeminiSessionTokenLimitExceededEvent
  | ServerGeminiLoopDetectedEvent;

// A turn manages the agentic loop turn within the server context.
@@ -54,13 +54,9 @@ export class ClearcutLogger {
    this.config = config;
  }

  static getInstance(config?: Config): ClearcutLogger | undefined {
    if (config === undefined || !config?.getUsageStatisticsEnabled())
      return undefined;
    if (!ClearcutLogger.instance) {
      ClearcutLogger.instance = new ClearcutLogger(config);
    }
    return ClearcutLogger.instance;
  static getInstance(_config?: Config): ClearcutLogger | undefined {
    // Disable the Clearcut logger to avoid network requests
    return undefined;
  }

  // eslint-disable-next-line @typescript-eslint/no-explicit-any -- Clearcut expects this format.
@@ -57,6 +57,7 @@ describe('loggers', () => {
  };

  beforeEach(() => {
    vi.clearAllMocks(); // Clear mock calls from previous tests
    vi.spyOn(sdk, 'isTelemetrySdkInitialized').mockReturnValue(true);
    vi.spyOn(logs, 'getLogger').mockReturnValue(mockLogger);
    vi.spyOn(uiTelemetry.uiTelemetryService, 'addEvent').mockImplementation(

@@ -146,7 +147,7 @@ describe('loggers', () => {
        'event.name': EVENT_USER_PROMPT,
        'event.timestamp': '2025-01-01T00:00:00.000Z',
        prompt_length: 11,
        prompt: 'test-prompt',
        // Removed the prompt field, since shouldLogUserPrompts now returns false
      },
    });
  });
@@ -38,8 +38,7 @@ import { uiTelemetryService, UiEvent } from './uiTelemetry.js';
import { ClearcutLogger } from './clearcut-logger/clearcut-logger.js';
import { safeJsonStringify } from '../utils/safeJsonStringify.js';

const shouldLogUserPrompts = (config: Config): boolean =>
  config.getTelemetryLogPromptsEnabled();
const shouldLogUserPrompts = (_config: Config): boolean => false; // Disable user prompt logging

function getCommonAttributes(config: Config): LogAttributes {
  return {
@@ -115,7 +115,7 @@ describe('getFolderStructure', () => {
  it('should return basic folder structure', async () => {
    const structure = await getFolderStructure('/testroot/subfolderA');
    const expected = `
Showing up to 200 items (files + folders).
Showing up to 20 items (files + folders).

/testroot/subfolderA/
├───fileA1.ts

@@ -129,7 +129,7 @@ Showing up to 200 items (files + folders).
  it('should handle an empty folder', async () => {
    const structure = await getFolderStructure('/testroot/emptyFolder');
    const expected = `
Showing up to 200 items (files + folders).
Showing up to 20 items (files + folders).

/testroot/emptyFolder/
`.trim();

@@ -139,7 +139,7 @@ Showing up to 200 items (files + folders).
  it('should ignore folders specified in ignoredFolders (default)', async () => {
    const structure = await getFolderStructure('/testroot');
    const expected = `
Showing up to 200 items (files + folders). Folders or files indicated with ... contain more items not shown, were ignored, or the display limit (200 items) was reached.
Showing up to 20 items (files + folders). Folders or files indicated with ... contain more items not shown, were ignored, or the display limit (20 items) was reached.

/testroot/
├───.hiddenfile

@@ -160,7 +160,7 @@ Showing up to 200 items (files + folders). Folders or files indicated with ... c
      ignoredFolders: new Set(['subfolderA', 'node_modules']),
    });
    const expected = `
Showing up to 200 items (files + folders). Folders or files indicated with ... contain more items not shown, were ignored, or the display limit (200 items) was reached.
Showing up to 20 items (files + folders). Folders or files indicated with ... contain more items not shown, were ignored, or the display limit (20 items) was reached.

/testroot/
├───.hiddenfile

@@ -177,7 +177,7 @@ Showing up to 200 items (files + folders). Folders or files indicated with ... c
      fileIncludePattern: /\.ts$/,
    });
    const expected = `
Showing up to 200 items (files + folders).
Showing up to 20 items (files + folders).

/testroot/subfolderA/
├───fileA1.ts
@@ -10,7 +10,7 @@ import * as path from 'path';
import { getErrorMessage, isNodeError } from './errors.js';
import { FileDiscoveryService } from '../services/fileDiscoveryService.js';

const MAX_ITEMS = 200;
const MAX_ITEMS = 20;
const TRUNCATION_INDICATOR = '...';
const DEFAULT_IGNORED_FOLDERS = new Set(['node_modules', '.git', 'dist']);

@@ -18,7 +18,7 @@ const DEFAULT_IGNORED_FOLDERS = new Set(['node_modules', '.git', 'dist']);

/** Options for customizing folder structure retrieval. */
interface FolderStructureOptions {
  /** Maximum number of files and folders combined to display. Defaults to 200. */
  /** Maximum number of files and folders combined to display. Defaults to 20. */
  maxItems?: number;
  /** Set of folder names to ignore completely. Case-sensitive. */
  ignoredFolders?: Set<string>;
@@ -77,23 +77,23 @@ if (!argv.s) {
  execSync('npm run build --workspaces', { stdio: 'inherit' });
}

console.log('packing @google/gemini-cli ...');
console.log('packing @qwen-code/qwen-code ...');
const cliPackageDir = join('packages', 'cli');
rmSync(join(cliPackageDir, 'dist', 'google-gemini-cli-*.tgz'), { force: true });
rmSync(join(cliPackageDir, 'dist', 'qwen-code-*.tgz'), { force: true });
execSync(
  `npm pack -w @google/gemini-cli --pack-destination ./packages/cli/dist`,
  `npm pack -w @qwen-code/qwen-code --pack-destination ./packages/cli/dist`,
  {
    stdio: 'ignore',
  },
);

console.log('packing @google/gemini-cli-core ...');
console.log('packing @qwen-code/qwen-code-core ...');
const corePackageDir = join('packages', 'core');
rmSync(join(corePackageDir, 'dist', 'google-gemini-cli-core-*.tgz'), {
rmSync(join(corePackageDir, 'dist', 'qwen-code-core-*.tgz'), {
  force: true,
});
execSync(
  `npm pack -w @google/gemini-cli-core --pack-destination ./packages/core/dist`,
  `npm pack -w @qwen-code/qwen-code-core --pack-destination ./packages/core/dist`,
  { stdio: 'ignore' },
);

@@ -102,11 +102,15 @@ const packageVersion = JSON.parse(
).version;

chmodSync(
  join(cliPackageDir, 'dist', `google-gemini-cli-${packageVersion}.tgz`),
  join(cliPackageDir, 'dist', `qwen-code-qwen-code-${packageVersion}.tgz`),
  0o755,
);
chmodSync(
  join(corePackageDir, 'dist', `google-gemini-cli-core-${packageVersion}.tgz`),
  join(
    corePackageDir,
    'dist',
    `qwen-code-qwen-code-core-${packageVersion}.tgz`,
  ),
  0o755,
);
@@ -134,14 +138,21 @@ function buildImage(imageName, dockerfile) {
    { stdio: buildStdout, shell: '/bin/bash' },
  );
  console.log(`built ${finalImageName}`);
  if (existsSync('/workspace/final_image_uri.txt')) {
    // The publish step only supports one image. If we build multiple, only the last one
    // will be published. Throw an error to make this failure explicit.
    throw new Error(
      'CI artifact file /workspace/final_image_uri.txt already exists. Refusing to overwrite.',

  // If an output file path was provided via command-line, write the final image URI to it.
  if (argv.outputFile) {
    console.log(
      `Writing final image URI for CI artifact to: ${argv.outputFile}`,
    );
    // The publish step only supports one image. If we build multiple, only the last one
    // will be published. Throw an error to make this failure explicit if the file already exists.
    if (existsSync(argv.outputFile)) {
      throw new Error(
        `CI artifact file ${argv.outputFile} already exists. Refusing to overwrite.`,
      );
    }
    writeFileSync(argv.outputFile, finalImageName);
  }
  writeFileSync('/workspace/final_image_uri.txt', finalImageName);
}

if (baseImage && baseDockerfile) {
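The hunk above replaces a hard-coded `/workspace/final_image_uri.txt` with a configurable `--outputFile`, keeping the refuse-to-overwrite guard. The guard in isolation, as a small sketch (the helper name is illustrative; the build script itself is plain Node, this sketch is in the repo's TypeScript style):

```typescript
import { existsSync, writeFileSync } from 'node:fs';

// Write the CI artifact only if the target file does not already exist.
// The publish step only supports one image, so a pre-existing file means a
// second build would silently win; failing loudly is the safer behavior.
function writeArtifactOnce(outputFile: string, finalImageName: string): void {
  if (existsSync(outputFile)) {
    throw new Error(
      `CI artifact file ${outputFile} already exists. Refusing to overwrite.`,
    );
  }
  writeFileSync(outputFile, finalImageName);
}
```

Note this check-then-write is not atomic; for this single-writer CI step that race is acceptable, while `writeFileSync` with the `wx` flag would close it entirely.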