docs: update all documentation to use Qwen Code branding

commit 8caa0542c4
parent c8f3b15971
Author: tanzhenxin
Date: 2025-08-20 15:16:45 +08:00

30 changed files with 340 additions and 339 deletions


@@ -1,6 +1,6 @@
-# Gemini CLI Core
+# Qwen Code Core
-Gemini CLI's core package (`packages/core`) is the backend portion of Gemini CLI, handling communication with the Gemini API, managing tools, and processing requests sent from `packages/cli`. For a general overview of Gemini CLI, see the [main documentation page](../index.md).
+Qwen Code's core package (`packages/core`) is the backend portion of Qwen Code, handling communication with model APIs, managing tools, and processing requests sent from `packages/cli`. For a general overview of Qwen Code, see the [main documentation page](../index.md).
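To make the cli/core split concrete, here is a minimal TypeScript sketch of the boundary this paragraph describes: the CLI hands a prompt to the core, and the core owns the session state and the model call. All names (`CoreRequest`, `handleRequest`, `callModelApi`) are hypothetical illustrations, not the actual interfaces in `packages/core`.

```ts
// Hypothetical sketch of the cli/core boundary; the real interfaces in
// packages/core are more involved. The core is the only layer that talks
// to the model provider.
interface CoreRequest {
  sessionId: string;
  prompt: string;
}

interface CoreResponse {
  text: string;
  toolCalls: Array<{ name: string; args: Record<string, unknown> }>;
}

// Session state kept by the core, keyed by session id.
const sessions = new Map<string, Array<{ role: string; content: string }>>();

async function handleRequest(req: CoreRequest): Promise<CoreResponse> {
  const history = sessions.get(req.sessionId) ?? [];
  history.push({ role: 'user', content: req.prompt });
  sessions.set(req.sessionId, history);
  return callModelApi(history); // provider-specific client lives here
}

// Stub standing in for the provider client (OpenAI-compatible, etc.).
async function callModelApi(
  messages: Array<{ role: string; content: string }>,
): Promise<CoreResponse> {
  return { text: `Echo of ${messages.length} messages`, toolCalls: [] };
}
```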
## Navigating this section
@@ -9,15 +9,15 @@ Gemini CLI's core package (`packages/core`) is the backend portion of Gemini CLI
## Role of the core
-While the `packages/cli` portion of Gemini CLI provides the user interface, `packages/core` is responsible for:
+While the `packages/cli` portion of Qwen Code provides the user interface, `packages/core` is responsible for:
-- **Gemini API interaction:** Securely communicating with the Google Gemini API, sending user prompts, and receiving model responses.
+- **Model API interaction:** Securely communicating with the configured model provider, sending user prompts, and receiving model responses.
- **Prompt engineering:** Constructing effective prompts for the model, potentially incorporating conversation history, tool definitions, and instructional context from context files (e.g., `QWEN.md`).
- **Tool management & orchestration:**
  - Registering available tools (e.g., file system tools, shell command execution).
-  - Interpreting tool use requests from the Gemini model.
+  - Interpreting tool use requests from the model.
  - Executing the requested tools with the provided arguments.
-  - Returning tool execution results to the Gemini model for further processing.
+  - Returning tool execution results to the model for further processing.
- **Session and state management:** Keeping track of the conversation state, including history and any relevant context required for coherent interactions.
- **Configuration:** Managing core-specific configurations, such as API key access, model selection, and tool settings.
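The register/interpret/execute/return cycle from the list above can be sketched in a few lines of TypeScript. This is an illustrative toy with an assumed `toolRegistry` shape, not the real registry in `packages/core`.

```ts
// Illustrative tool registry and orchestration loop; names are hypothetical.
type ToolFn = (args: Record<string, unknown>) => Promise<string>;

const toolRegistry = new Map<string, ToolFn>();

// Registration: e.g., a simple file system tool.
toolRegistry.set('read_file', async (args) => {
  const fs = await import('node:fs/promises');
  return fs.readFile(String(args.path), 'utf8');
});

// Interpret each tool call the model requested, execute it, and collect the
// results so they can be sent back to the model for further processing.
async function runToolCalls(
  calls: Array<{ name: string; args: Record<string, unknown> }>,
): Promise<Array<{ name: string; result: string }>> {
  const results: Array<{ name: string; result: string }> = [];
  for (const { name, args } of calls) {
    const tool = toolRegistry.get(name);
    results.push({
      name,
      result: tool ? await tool(args) : `Unknown tool: ${name}`,
    });
  }
  return results;
}
```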
@@ -25,20 +25,20 @@ While the `packages/cli` portion of Gemini CLI provides the user interface, `pac
The core plays a vital role in security:
-- **API key management:** It handles the `GEMINI_API_KEY` and ensures it's used securely when communicating with the Gemini API.
+- **API key management:** It handles provider credentials and ensures they're used securely when communicating with APIs.
- **Tool execution:** When tools interact with the local system (e.g., `run_shell_command`), the core (and its underlying tool implementations) must do so with appropriate caution, often involving sandboxing mechanisms to prevent unintended modifications.
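As a small illustration of the credential-handling point, a core can read the key from the environment and fail fast when it is absent. Treat the variable name `OPENAI_API_KEY` as an assumption here; match it to whatever your provider setup actually uses.

```ts
// Fail fast on a missing credential and never log the key itself.
// The variable name is an assumption; check your provider's docs.
function getApiKey(): string {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('No API key found. Set OPENAI_API_KEY before starting.');
  }
  return key;
}
```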
## Chat history compression
-To ensure that long conversations don't exceed the token limits of the Gemini model, the core includes a chat history compression feature.
+To ensure that long conversations don't exceed the token limits of the selected model, the core includes a chat history compression feature.
When a conversation approaches the token limit for the configured model, the core automatically compresses the conversation history before sending it to the model. This compression is designed to be lossless in terms of the information conveyed, but it reduces the overall number of tokens used.
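A rough sketch of when such compression might trigger follows. The 0.7 threshold, the 4-characters-per-token estimate, and the function names are all assumptions for illustration; none of them come from `packages/core`.

```ts
// Assumed threshold: compress once usage passes 70% of the context window.
const COMPRESSION_THRESHOLD = 0.7;

function maybeCompress(history: string[], tokenLimit: number): string[] {
  const used = history.reduce((n, msg) => n + estimateTokens(msg), 0);
  if (history.length <= 4 || used < tokenLimit * COMPRESSION_THRESHOLD) {
    return history;
  }
  // Fold older turns into one summary message; keep recent turns verbatim.
  const recent = history.slice(-4);
  const summary = summarize(history.slice(0, -4));
  return [summary, ...recent];
}

// Crude heuristic: roughly 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function summarize(turns: string[]): string {
  // A real implementation would ask the model for a summary; this
  // placeholder just truncates to show the shape of the transformation.
  return `Summary of ${turns.length} earlier turns: ${turns.join(' ').slice(0, 200)}`;
}
```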
-You can find the token limits for each model in the [Google AI documentation](https://ai.google.dev/gemini-api/docs/models).
+You can find token limits for each provider's models in their documentation.
## Model fallback
-Gemini CLI includes a model fallback mechanism to ensure that you can continue to use the CLI even if the default "pro" model is rate-limited.
+Qwen Code includes a model fallback mechanism to ensure that you can continue to use the CLI even if the default model is rate-limited.
If you are using the default "pro" model and the CLI detects that you are being rate-limited, it automatically switches to the "flash" model for the current session. This allows you to continue working without interruption.
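The session-scoped switch can be pictured as below. The model names, the HTTP 429 status check, and the function names are illustrative assumptions, not Qwen Code's actual implementation.

```ts
// Sketch of rate-limit fallback for the current session; names are assumed.
let sessionModel = 'default-model';
const FALLBACK_MODEL = 'fallback-model';

async function sendWithFallback(
  send: (model: string) => Promise<string>,
): Promise<string> {
  try {
    return await send(sessionModel);
  } catch (err: unknown) {
    if (isRateLimit(err) && sessionModel !== FALLBACK_MODEL) {
      sessionModel = FALLBACK_MODEL; // switch for the rest of the session
      return send(sessionModel);
    }
    throw err;
  }
}

// Assumed convention: provider errors carry an HTTP status field.
function isRateLimit(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    (err as { status?: number }).status === 429
  );
}
```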