2 .gitignore vendored
@@ -47,5 +47,3 @@ packages/vscode-ide-companion/*.vsix
logs/

# GHA credentials
gha-creds-*.json

QWEN.md
193 QWEN.md Normal file
@@ -0,0 +1,193 @@
## Building and running

Before submitting any changes, it is crucial to validate them by running the full preflight check. This command will build the repository, run all tests, check for type errors, and lint the code.

To run the full suite of checks, execute the following command:

```bash
npm run preflight
```

This single command ensures that your changes meet all the quality gates of the project. While you can run the individual steps (`build`, `test`, `typecheck`, `lint`) separately, it is highly recommended to use `npm run preflight` to ensure comprehensive validation.
## Writing Tests

This project uses **Vitest** as its primary testing framework. When writing tests, aim to follow existing patterns. Key conventions include:

### Test Structure and Framework

- **Framework**: All tests are written using Vitest (`describe`, `it`, `expect`, `vi`).
- **File Location**: Test files (`*.test.ts` for logic, `*.test.tsx` for React components) are co-located with the source files they test.
- **Configuration**: Test environments are defined in `vitest.config.ts` files.
- **Setup/Teardown**: Use `beforeEach` and `afterEach`. Commonly, `vi.resetAllMocks()` is called in `beforeEach` and `vi.restoreAllMocks()` in `afterEach`.
### Mocking (`vi` from Vitest)

- **ES Modules**: Mock with `vi.mock('module-name', async (importOriginal) => { ... })`. Use `importOriginal` for selective mocking.
  - _Example_: `vi.mock('os', async (importOriginal) => { const actual = await importOriginal(); return { ...actual, homedir: vi.fn() }; });`
- **Mocking Order**: For critical dependencies (e.g., `os`, `fs`) that affect module-level constants, place `vi.mock` at the _very top_ of the test file, before other imports.
- **Hoisting**: Use `const myMock = vi.hoisted(() => vi.fn());` if a mock function needs to be defined before its use in a `vi.mock` factory.
- **Mock Functions**: Create with `vi.fn()`. Define behavior with `mockImplementation()`, `mockResolvedValue()`, or `mockRejectedValue()`.
- **Spying**: Use `vi.spyOn(object, 'methodName')`. Restore spies with `mockRestore()` in `afterEach`. A short sketch of these conventions follows below.
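As a rough illustration of how these conventions fit together, here is a minimal sketch of a Vitest test file. It is not taken from the repository: `getConfigDir` is a hypothetical function defined inline for brevity, and real tests should mirror the patterns in neighboring `*.test.ts` files.

```ts
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import * as os from 'os';

// Mock placed at the very top so module-level constants see the mocked value.
vi.mock('os', async (importOriginal) => {
  const actual = await importOriginal<typeof import('os')>();
  return { ...actual, homedir: vi.fn() };
});

// Hypothetical function under test: resolves a path under the home directory.
function getConfigDir(): string {
  return `${os.homedir()}/.myapp`;
}

describe('getConfigDir', () => {
  beforeEach(() => {
    vi.resetAllMocks();
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  it('builds the config path from the mocked home directory', () => {
    vi.mocked(os.homedir).mockReturnValue('/home/tester');
    expect(getConfigDir()).toBe('/home/tester/.myapp');
  });
});
```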
### Commonly Mocked Modules

- **Node.js built-ins**: `fs`, `fs/promises`, `os` (especially `os.homedir()`), `path`, `child_process` (`execSync`, `spawn`).
- **External SDKs**: `@google/genai`, `@modelcontextprotocol/sdk`.
- **Internal Project Modules**: Dependencies from other project packages are often mocked.

### React Component Testing (CLI UI - Ink)

- Use `render()` from `ink-testing-library`.
- Assert output with `lastFrame()`.
- Wrap components in necessary `Context.Provider`s.
- Mock custom React hooks and complex child components using `vi.mock()`.
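A minimal sketch of the Ink testing pattern, assuming a hypothetical `<Greeting>` component defined inline; real component tests live in `*.test.tsx` files next to their sources.

```tsx
import React from 'react';
import { Text } from 'ink';
import { render } from 'ink-testing-library';
import { describe, it, expect } from 'vitest';

// Hypothetical component, defined inline for brevity.
function Greeting({ name }: { name: string }) {
  return <Text>Hello, {name}!</Text>;
}

describe('<Greeting />', () => {
  it('renders the provided name', () => {
    const { lastFrame } = render(<Greeting name="Qwen" />);
    expect(lastFrame()).toContain('Hello, Qwen!');
  });
});
```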
### Asynchronous Testing

- Use `async/await`.
- For timers, use `vi.useFakeTimers()`, `vi.advanceTimersByTimeAsync()`, `vi.runAllTimersAsync()`.
- Test promise rejections with `await expect(promise).rejects.toThrow(...)`.
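A short sketch combining fake timers with rejection testing. `delayedGreeting` is a hypothetical helper invented for the example.

```ts
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';

// Hypothetical async helper: resolves after a delay, rejects on empty input.
function delayedGreeting(name: string, ms: number): Promise<string> {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (!name) {
        reject(new Error('name required'));
      } else {
        resolve(`Hello, ${name}!`);
      }
    }, ms);
  });
}

describe('delayedGreeting', () => {
  beforeEach(() => {
    vi.useFakeTimers();
  });

  afterEach(() => {
    vi.useRealTimers();
  });

  it('resolves once the timer has elapsed', async () => {
    const promise = delayedGreeting('Qwen', 1_000);
    await vi.advanceTimersByTimeAsync(1_000);
    await expect(promise).resolves.toBe('Hello, Qwen!');
  });

  it('rejects when no name is given', async () => {
    const promise = delayedGreeting('', 500);
    // Attach the rejection expectation before advancing timers to avoid an
    // unhandled rejection while the fake timers run.
    const assertion = expect(promise).rejects.toThrow('name required');
    await vi.runAllTimersAsync();
    await assertion;
  });
});
```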
### General Guidance

- When adding tests, first examine existing tests to understand and conform to established conventions.
- Pay close attention to the mocks at the top of existing test files; they reveal critical dependencies and how they are managed in a test environment.

## Git Repo

The main branch for this project is called "main".
## JavaScript/TypeScript

When contributing to this React, Node, and TypeScript codebase, please prioritize the use of plain JavaScript objects with accompanying TypeScript interface or type declarations over JavaScript class syntax. This approach offers significant advantages, especially concerning interoperability with React and overall code maintainability.

### Preferring Plain Objects over Classes

JavaScript classes, by their nature, are designed to encapsulate internal state and behavior. While this can be useful in some object-oriented paradigms, it often introduces unnecessary complexity and friction when working with React's component-based architecture. Here's why plain objects are preferred:

- Seamless React Integration: React components thrive on explicit props and state management. Classes' tendency to store internal state directly within instances can make prop and state propagation harder to reason about and maintain. Plain objects, on the other hand, are inherently immutable (when used thoughtfully) and can be easily passed as props, simplifying data flow and reducing unexpected side effects.

- Reduced Boilerplate and Increased Conciseness: Classes often promote the use of constructors, `this` binding, getters, setters, and other boilerplate that can unnecessarily bloat code. TypeScript interface and type declarations provide powerful static type checking without the runtime overhead or verbosity of class definitions. This allows for more succinct and readable code, aligning with JavaScript's strengths in functional programming.

- Enhanced Readability and Predictability: Plain objects, especially when their structure is clearly defined by TypeScript interfaces, are often easier to read and understand. Their properties are directly accessible, and there is no hidden internal state or complex inheritance chain to navigate. This predictability leads to fewer bugs and a more maintainable codebase.

- Simplified Immutability: While not strictly enforced, plain objects encourage an immutable approach to data. When you need to modify an object, you typically create a new one with the desired changes rather than mutating the original. This pattern aligns perfectly with React's reconciliation process and helps prevent subtle bugs related to shared mutable state.

- Better Serialization and Deserialization: Plain JavaScript objects are naturally easy to serialize to JSON and deserialize back, which is a common requirement in web development (e.g., for API communication or local storage). Classes, with their methods and prototypes, can complicate this process.
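As a brief, hypothetical illustration of this preference (the `Session` type and its helper functions are invented for the example):

```ts
// A minimal sketch of the preferred style: a type declaration plus plain objects
// and standalone functions, instead of a class with methods and internal state.
export interface Session {
  readonly id: string;
  readonly startedAt: number;
  readonly turns: readonly string[];
}

export function createSession(id: string): Session {
  return { id, startedAt: Date.now(), turns: [] };
}

// Updates are expressed immutably: return a new object, never mutate the input.
export function addTurn(session: Session, turn: string): Session {
  return { ...session, turns: [...session.turns, turn] };
}
```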
### Embracing ES Module Syntax for Encapsulation

Rather than relying on Java-esque private or public class members, which can be verbose and sometimes limit flexibility, we strongly prefer leveraging ES module syntax (`import`/`export`) for encapsulating private and public APIs.

- Clearer Public API Definition: With ES modules, anything that is exported is part of the public API of that module, while anything not exported is inherently private to that module. This provides a very clear and explicit way to define which parts of your code are meant to be consumed by other modules.

- Enhanced Testability (Without Exposing Internals): By default, unexported functions or variables are not accessible from outside the module. This encourages you to test the public API of your modules rather than their internal implementation details. If you find yourself needing to spy on or stub an unexported function for testing purposes, it is often a "code smell" indicating that the function might be a good candidate for extraction into its own separate, testable module with a well-defined public API. This promotes a more robust and maintainable testing strategy.

- Reduced Coupling: Explicitly defined module boundaries through `import`/`export` help reduce coupling between different parts of your codebase. This makes it easier to refactor, debug, and understand individual components in isolation.
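A small, hypothetical sketch of module-level encapsulation; `settingsLoader.ts`, `normalizeKey`, and `parseSettings` are invented names:

```ts
// settingsLoader.ts

// Not exported: private to this module and invisible to importers and tests.
function normalizeKey(key: string): string {
  return key.trim().toLowerCase();
}

// Exported: the module's public API. Tests should exercise this surface only.
export function parseSettings(raw: Record<string, string>): Map<string, string> {
  const settings = new Map<string, string>();
  for (const [key, value] of Object.entries(raw)) {
    settings.set(normalizeKey(key), value);
  }
  return settings;
}
```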
### Avoiding `any` Types and Type Assertions; Preferring `unknown`

TypeScript's power lies in its ability to provide static type checking, catching potential errors before your code runs. To fully leverage this, it's crucial to avoid the `any` type and be judicious with type assertions.

- **The Dangers of `any`**: Using `any` effectively opts out of TypeScript's type checking for that particular variable or expression. While it might seem convenient in the short term, it introduces significant risks:
  - **Loss of Type Safety**: You lose all the benefits of type checking, making it easy to introduce runtime errors that TypeScript would otherwise have caught.
  - **Reduced Readability and Maintainability**: Code with `any` types is harder to understand and maintain, as the expected type of data is no longer explicitly defined.
  - **Masking Underlying Issues**: Often, the need for `any` indicates a deeper problem in the design of your code or the way you're interacting with external libraries. It's a sign that you might need to refine your types or refactor your code.

- **Preferring `unknown` over `any`**: When you absolutely cannot determine the type of a value at compile time and are tempted to reach for `any`, consider using `unknown` instead. `unknown` is a type-safe counterpart to `any`. While a variable of type `unknown` can hold any value, you must perform type narrowing (e.g., using `typeof` or `instanceof` checks, or a type assertion) before you can perform any operations on it. This forces you to handle the unknown type explicitly, preventing accidental runtime errors.
```ts
function processValue(value: unknown) {
  if (typeof value === 'string') {
    // value is now safely a string
    console.log(value.toUpperCase());
  } else if (typeof value === 'number') {
    // value is now safely a number
    console.log(value * 2);
  }
  // Without narrowing, you cannot access properties or methods on 'value'
  // console.log(value.someProperty); // Error: Object is of type 'unknown'.
}
```

- **Type Assertions (`as Type`) - Use with Caution**: Type assertions tell the TypeScript compiler, "Trust me, I know what I'm doing; this is definitely of this type." While there are legitimate use cases (e.g., when dealing with external libraries that don't have perfect type definitions, or when you have more information than the compiler), they should be used sparingly and with extreme caution.
  - **Bypassing Type Checking**: Like `any`, type assertions bypass TypeScript's safety checks. If your assertion is incorrect, you introduce a runtime error that TypeScript would not have warned you about.
  - **Code Smell in Testing**: A common scenario where `any` or type assertions might be tempting is when trying to test "private" implementation details (e.g., spying on or stubbing an unexported function within a module). This is a strong indication of a "code smell" in your testing strategy and potentially your code structure. Instead of trying to force access to private internals, consider whether those internal details should be refactored into a separate module with a well-defined public API. This makes them inherently testable without compromising encapsulation.
### Type narrowing `switch` clauses

Use the `checkExhaustive` helper in the `default` clause of a `switch` statement. This ensures that all of the possible options within the value or enumeration are handled.

This helper can be found in `packages/cli/src/utils/checks.ts`.
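A sketch of the intended usage, assuming the common `(value: never) => never` shape for the helper (check `packages/cli/src/utils/checks.ts` for the actual signature); the `AuthMethod` union and the import path are hypothetical:

```ts
import { checkExhaustive } from './utils/checks.js';

// Hypothetical discriminated union used only for illustration.
type AuthMethod = 'api-key' | 'oauth' | 'vertex-ai';

function describeAuth(method: AuthMethod): string {
  switch (method) {
    case 'api-key':
      return 'Using a Gemini API key';
    case 'oauth':
      return 'Using OAuth login';
    case 'vertex-ai':
      return 'Using Vertex AI credentials';
    default:
      // If a new AuthMethod member is added but not handled above,
      // `method` is no longer `never` here and this line fails to compile.
      return checkExhaustive(method);
  }
}
```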
### Embracing JavaScript's Array Operators

To further enhance code cleanliness and promote safe functional programming practices, leverage JavaScript's rich set of array operators as much as possible. Methods like `.map()`, `.filter()`, `.reduce()`, `.slice()`, `.sort()`, and others are incredibly powerful for transforming and manipulating data collections in an immutable and declarative way.

Using these operators:

- Promotes Immutability: Most array operators return new arrays, leaving the original array untouched. This functional approach helps prevent unintended side effects and makes your code more predictable.
- Improves Readability: Chaining array operators often leads to more concise and expressive code than traditional for loops or imperative logic. The intent of the operation is clear at a glance.
- Facilitates Functional Programming: These operators are cornerstones of functional programming, encouraging the creation of pure functions that take inputs and produce outputs without causing side effects. This paradigm is highly beneficial for writing robust and testable code that pairs well with React.
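A small, hypothetical example of the declarative, immutable style these operators enable (the `ToolCall` data is invented for the illustration):

```ts
interface ToolCall {
  name: string;
  durationMs: number;
  succeeded: boolean;
}

// Hypothetical data, used only to illustrate declarative, immutable transforms.
const calls: ToolCall[] = [
  { name: 'ReadFileTool', durationMs: 12, succeeded: true },
  { name: 'ShellTool', durationMs: 340, succeeded: false },
  { name: 'GlobTool', durationMs: 8, succeeded: true },
];

// Total time spent in successful calls, computed without mutating `calls`.
const successfulMs = calls
  .filter((call) => call.succeeded)
  .map((call) => call.durationMs)
  .reduce((total, ms) => total + ms, 0);

// A sorted copy; `slice()` keeps the original array untouched.
const slowestFirst = calls.slice().sort((a, b) => b.durationMs - a.durationMs);

console.log(successfulMs, slowestFirst[0].name);
```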
By consistently applying these principles, we can maintain a codebase that is not only efficient and performant but also a joy to work with, both now and in the future.
## React (mirrored and adjusted from [react-mcp-server](https://github.com/facebook/react/blob/4448b18760d867f9e009e810571e7a3b8930bb19/compiler/packages/react-mcp-server/src/index.ts#L376C1-L441C94))

### Role

You are a React assistant that helps users write more efficient and optimizable React code. You specialize in identifying patterns that enable React Compiler to automatically apply optimizations, reducing unnecessary re-renders and improving application performance.

### Follow these guidelines in all code you produce and suggest

Use functional components with Hooks: Do not generate class components or use old lifecycle methods. Manage state with `useState` or `useReducer`, and side effects with `useEffect` (or related Hooks). Always prefer functions and Hooks for any new component logic.

Keep components pure and side-effect-free during rendering: Do not produce code that performs side effects (like subscriptions, network requests, or modifying external variables) directly inside the component's function body. Such actions should be wrapped in `useEffect` or performed in event handlers. Ensure your render logic is a pure function of props and state.

Respect one-way data flow: Pass data down through props and avoid any global mutations. If two components need to share data, lift that state up to a common parent or use React Context, rather than trying to sync local state or use external variables.

Never mutate state directly: Always generate code that updates state immutably. For example, use spread syntax or other methods to create new objects/arrays when updating state. Do not use assignments like `state.someValue = ...` or array mutations like `array.push()` on state variables. Use the state setter (`setState` from `useState`, etc.) to update state, as in the sketch below.
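A minimal, generic React sketch of immutable state updates; the `TodoList` component is hypothetical and web-flavored rather than Ink-specific:

```tsx
import React, { useState } from 'react';

interface Todo {
  id: number;
  title: string;
  done: boolean;
}

// Hypothetical component, for illustration only.
export function TodoList({ initial }: { initial: Todo[] }) {
  const [todos, setTodos] = useState<Todo[]>(initial);

  const toggle = (id: number) => {
    // Create new arrays/objects instead of mutating state in place, and use a
    // functional update so the code stays correct under concurrent rendering.
    setTodos((prev) =>
      prev.map((todo) => (todo.id === id ? { ...todo, done: !todo.done } : todo)),
    );
  };

  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id} onClick={() => toggle(todo.id)}>
          {todo.done ? '[x]' : '[ ]'} {todo.title}
        </li>
      ))}
    </ul>
  );
}
```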
Accurately use useEffect and other effect Hooks: whenever you think you need a `useEffect`, think and reason harder to avoid it. `useEffect` is primarily used for synchronization, for example synchronizing React with some external state. IMPORTANT - Don't call the state setter (the second value returned by `useState`) within a `useEffect`, as that will degrade performance. When writing effects, include all necessary dependencies in the dependency array. Do not suppress ESLint rules or omit dependencies that the effect's code uses. Structure the effect callbacks to handle changing values properly (e.g., update subscriptions on prop changes, clean up on unmount or dependency change). If a piece of logic should only run in response to a user action (like a form submission or button click), put that logic in an event handler, not in a `useEffect`. Where possible, effects should return a cleanup function.

Follow the Rules of Hooks: Ensure that any Hooks (`useState`, `useEffect`, `useContext`, custom Hooks, etc.) are called unconditionally at the top level of React function components or other Hooks. Do not generate code that calls Hooks inside loops, conditional statements, or nested helper functions. Do not call Hooks in non-component functions or outside the React component rendering context.

Use refs only when necessary: Avoid using `useRef` unless the task genuinely requires it (such as focusing a control, managing an animation, or integrating with a non-React library). Do not use refs to store application state that should be reactive. If you do use refs, never write to or read from `ref.current` during the rendering of a component (except for initial setup like lazy initialization). Any ref usage should not affect the rendered output directly.

Prefer composition and small components: Break down UI into small, reusable components rather than writing large monolithic components. The code you generate should promote clarity and reusability by composing components together. Similarly, abstract repetitive logic into custom Hooks when appropriate to avoid duplicating code.

Optimize for concurrency: Assume React may render your components multiple times for scheduling purposes (especially in development with Strict Mode). Write code that remains correct even if the component function runs more than once. For instance, avoid side effects in the component body and use functional state updates (e.g., `setCount(c => c + 1)`) when updating state based on previous state to prevent race conditions. Always include cleanup functions in effects that subscribe to external resources. Don't write `useEffect`s for "do this when this changes" side effects. This ensures your generated code will work with React's concurrent rendering features without issues.

Optimize to reduce network waterfalls - Use parallel data fetching wherever possible (e.g., start multiple requests at once rather than one after another). Leverage Suspense for data loading and keep requests co-located with the component that needs the data. In a server-centric approach, fetch related data together in a single request on the server side (using Server Components, for example) to reduce round trips. Also, consider using caching layers or global fetch management to avoid repeating identical requests.

Rely on React Compiler - `useMemo`, `useCallback`, and `React.memo` can be omitted if React Compiler is enabled. Avoid premature optimization with manual memoization. Instead, focus on writing clear, simple components with direct data flow and side-effect-free render functions. Let the React Compiler handle tree-shaking, inlining, and other performance enhancements to keep your code base simpler and more maintainable.

Design for a good user experience - Provide clear, minimal, and non-blocking UI states. When data is loading, show lightweight placeholders (e.g., skeleton screens) rather than intrusive spinners everywhere. Handle errors gracefully with a dedicated error boundary or a friendly inline message. Where possible, render partial data as it becomes available rather than making the user wait for everything. Suspense allows you to declare the loading states in your component tree in a natural way, preventing "flash" states and improving perceived performance.
### Process

1. Analyze the user's code for optimization opportunities:
   - Check for React anti-patterns that prevent compiler optimization
   - Look for component structure issues that limit compiler effectiveness
   - Think about each suggestion you are making and consult React docs for best practices

2. Provide actionable guidance:
   - Explain specific code changes with clear reasoning
   - Show before/after examples when suggesting changes
   - Only suggest changes that meaningfully improve optimization potential

### Optimization Guidelines

- State updates should be structured to enable granular updates
- Side effects should be isolated and dependencies clearly defined
## Comments policy

Only write high-value comments, if at all. Avoid talking to the user through comments.

## General style requirements

Use hyphens instead of underscores in flag names (e.g. `my-flag` instead of `my_flag`).
162 README.gemini.md Normal file
@@ -0,0 +1,162 @@
# Gemini CLI

[CI](https://github.com/google-gemini/gemini-cli/actions/workflows/ci.yml)


This repository contains the Gemini CLI, a command-line AI workflow tool that connects to your
|
||||
tools, understands your code and accelerates your workflows.
|
||||
|
||||
With the Gemini CLI you can:
|
||||
|
||||
- Query and edit large codebases in and beyond Gemini's 1M token context window.
|
||||
- Generate new apps from PDFs or sketches, using Gemini's multimodal capabilities.
|
||||
- Automate operational tasks, like querying pull requests or handling complex rebases.
|
||||
- Use tools and MCP servers to connect new capabilities, including [media generation with Imagen,
|
||||
Veo or Lyria](https://github.com/GoogleCloudPlatform/vertex-ai-creative-studio/tree/main/experiments/mcp-genmedia)
|
||||
- Ground your queries with the [Google Search](https://ai.google.dev/gemini-api/docs/grounding)
  tool, built into Gemini.
|
||||
|
||||
## Quickstart
|
||||
|
||||
1. **Prerequisites:** Ensure you have [Node.js version 20](https://nodejs.org/en/download) or higher installed.
|
||||
2. **Run the CLI:** Execute the following command in your terminal:
|
||||
|
||||
```bash
|
||||
npx https://github.com/google-gemini/gemini-cli
|
||||
```
|
||||
|
||||
Or install it with:
|
||||
|
||||
```bash
|
||||
npm install -g @google/gemini-cli
|
||||
```
|
||||
|
||||
Then, run the CLI from anywhere:
|
||||
|
||||
```bash
|
||||
gemini
|
||||
```
|
||||
|
||||
3. **Pick a color theme**
|
||||
4. **Authenticate:** When prompted, sign in with your personal Google account. This will grant you up to 60 model requests per minute and 1,000 model requests per day using Gemini.
|
||||
|
||||
You are now ready to use the Gemini CLI!
|
||||
|
||||
### Use a Gemini API key:
|
||||
|
||||
The Gemini API provides a free tier with [100 requests per day](https://ai.google.dev/gemini-api/docs/rate-limits#free-tier) using Gemini 2.5 Pro, control over which model you use, and access to higher rate limits (with a paid plan):
|
||||
|
||||
1. Generate a key from [Google AI Studio](https://aistudio.google.com/apikey).
|
||||
2. Set it as an environment variable in your terminal. Replace `YOUR_API_KEY` with your generated key.
|
||||
|
||||
```bash
|
||||
export GEMINI_API_KEY="YOUR_API_KEY"
|
||||
```
|
||||
|
||||
3. (Optionally) Upgrade your Gemini API project to a paid plan on the API key page (will automatically unlock [Tier 1 rate limits](https://ai.google.dev/gemini-api/docs/rate-limits#tier-1))
|
||||
|
||||
### Use a Vertex AI API key:
|
||||
|
||||
The Vertex AI API provides a [free tier](https://cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview) using express mode for Gemini 2.5 Pro, control over which model you use, and access to higher rate limits with a billing account:
|
||||
|
||||
1. Generate a key from [Google Cloud](https://cloud.google.com/vertex-ai/generative-ai/docs/start/api-keys).
|
||||
2. Set it as an environment variable in your terminal. Replace `YOUR_API_KEY` with your generated key and set `GOOGLE_GENAI_USE_VERTEXAI` to `true`.
|
||||
|
||||
```bash
|
||||
export GOOGLE_API_KEY="YOUR_API_KEY"
|
||||
export GOOGLE_GENAI_USE_VERTEXAI=true
|
||||
```
|
||||
|
||||
3. (Optionally) Add a billing account on your project to get access to [higher usage limits](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas)
|
||||
|
||||
For other authentication methods, including Google Workspace accounts, see the [authentication](./docs/cli/authentication.md) guide.
|
||||
|
||||
## Examples
|
||||
|
||||
Once the CLI is running, you can start interacting with Gemini from your shell.
|
||||
|
||||
You can start a project from a new directory:
|
||||
|
||||
```sh
|
||||
cd new-project/
|
||||
gemini
|
||||
> Write me a Gemini Discord bot that answers questions using a FAQ.md file I will provide
|
||||
```
|
||||
|
||||
Or work with an existing project:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/google-gemini/gemini-cli
|
||||
cd gemini-cli
|
||||
gemini
|
||||
> Give me a summary of all of the changes that went in yesterday
|
||||
```
|
||||
|
||||
### Next steps
|
||||
|
||||
- Learn how to [contribute to or build from the source](./CONTRIBUTING.md).
|
||||
- Explore the available **[CLI Commands](./docs/cli/commands.md)**.
|
||||
- If you encounter any issues, review the **[troubleshooting guide](./docs/troubleshooting.md)**.
|
||||
- For more comprehensive documentation, see the [full documentation](./docs/index.md).
|
||||
- Take a look at some [popular tasks](#popular-tasks) for more inspiration.
|
||||
- Check out our **[Official Roadmap](./ROADMAP.md)**
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
Head over to the [troubleshooting guide](docs/troubleshooting.md) if you're
|
||||
having issues.
|
||||
|
||||
## Popular tasks
|
||||
|
||||
### Explore a new codebase
|
||||
|
||||
Start by `cd`ing into an existing or newly-cloned repository and running `gemini`.
|
||||
|
||||
```text
|
||||
> Describe the main pieces of this system's architecture.
|
||||
```
|
||||
|
||||
```text
|
||||
> What security mechanisms are in place?
|
||||
```
|
||||
|
||||
### Work with your existing code
|
||||
|
||||
```text
|
||||
> Implement a first draft for GitHub issue #123.
|
||||
```
|
||||
|
||||
```text
|
||||
> Help me migrate this codebase to the latest version of Java. Start with a plan.
|
||||
```
|
||||
|
||||
### Automate your workflows
|
||||
|
||||
Use MCP servers to integrate your local system tools with your enterprise collaboration suite.
|
||||
|
||||
```text
|
||||
> Make me a slide deck showing the git history from the last 7 days, grouped by feature and team member.
|
||||
```
|
||||
|
||||
```text
|
||||
> Make a full-screen web app for a wall display to show our most interacted-with GitHub issues.
|
||||
```
|
||||
|
||||
### Interact with your system
|
||||
|
||||
```text
|
||||
> Convert all the images in this directory to png, and rename them to use dates from the exif data.
|
||||
```
|
||||
|
||||
```text
|
||||
> Organize my PDF invoices by month of expenditure.
|
||||
```
|
||||
|
||||
### Uninstall
|
||||
|
||||
Head over to the [Uninstall](docs/Uninstall.md) guide for uninstallation instructions.
|
||||
|
||||
## Terms of Service and Privacy Notice
|
||||
|
||||
For details on the terms of service and privacy notice applicable to your use of Gemini CLI, see the [Terms of Service and Privacy Notice](./docs/tos-privacy.md).
|
||||
63 ROADMAP.gemini.md Normal file
@@ -0,0 +1,63 @@
|
||||
# Qwen CLI Roadmap
|
||||
|
||||
The [Official Gemini CLI Roadmap](https://github.com/orgs/google-gemini/projects/11/)
|
||||
|
||||
Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal. It provides lightweight access to Gemini, giving you the most direct path from your prompt to our model.
|
||||
|
||||
This document outlines our approach to the Gemini CLI roadmap. Here, you'll find our guiding principles and a breakdown of the key areas we are
|
||||
focused on for development. Our roadmap is not a static list but a dynamic set of priorities that are tracked live in our GitHub Issues.
|
||||
|
||||
As an [Apache 2.0 open source project](https://github.com/google-gemini/gemini-cli?tab=Apache-2.0-1-ov-file#readme), we appreciate and welcome [public contributions](https://github.com/google-gemini/gemini-cli/blob/main/CONTRIBUTING.md), and will give first priority to those contributions aligned with our roadmap. If you want to propose a new feature or change to our roadmap, please start by [opening an issue for discussion](https://github.com/google-gemini/gemini-cli/issues/new/choose).
|
||||
|
||||
## Disclaimer
|
||||
|
||||
This roadmap represents our current thinking and is for informational purposes only. It is not a commitment or a guarantee of future delivery. The development, release, and timing of any features are subject to change, and we may update the roadmap based on community discussions as well as when our priorities evolve.
|
||||
|
||||
## Guiding Principles
|
||||
|
||||
Our development is guided by the following principles:
|
||||
|
||||
- **Power & Simplicity:** Deliver access to state-of-the-art Gemini models with an intuitive and easy-to-use lightweight command-line interface.
|
||||
- **Extensibility:** An adaptable agent to help you with a variety of use cases and environments along with the ability to run these agents anywhere.
|
||||
- **Intelligent:** Gemini CLI should be reliably ranked among the best agentic tools as measured by benchmarks like SWE Bench, Terminal Bench, and CSAT.
|
||||
- **Free and Open Source:** Foster a thriving open source community where cost isn’t a barrier to personal use, and PRs get merged quickly. This means resolving and closing issues, pull requests, and discussion posts quickly.
|
||||
|
||||
## How the Roadmap Works
|
||||
|
||||
Our roadmap is managed directly through GitHub Issues. See our entry point Roadmap Issue [here](https://github.com/google-gemini/gemini-cli/issues/4191). This approach allows for transparency and gives you a direct way to learn more or get involved with any specific initiative. All our roadmap items will be tagged as Type:`Feature` and Label:`maintainer` for features we are actively working on, or Type:`Task` and Label:`maintainer` for a more detailed list of tasks.
|
||||
|
||||
Issues are organized to provide key information at a glance:
|
||||
|
||||
- **Target Quarter:** `Milestone` denotes the anticipated delivery timeline.
|
||||
- **Feature Area:** Labels such as `area/model` or `area/tooling` categorize the work.
|
||||
- **Issue Type:** _Workstream_ => _Epics_ => _Features_ => _Tasks|Bugs_
|
||||
|
||||
To see what we're working on, you can filter our issues by these dimensions. See all our items [here](https://github.com/orgs/google-gemini/projects/11/views/19)
|
||||
|
||||
## Focus Areas
|
||||
|
||||
To better organize our efforts, we categorize our work into several key feature areas. These labels are used on our GitHub Issues to help you filter and
|
||||
find initiatives that interest you.
|
||||
|
||||
- **Authentication:** Secure user access via API keys, Gemini Code Assist login, etc.
|
||||
- **Model:** Support new Gemini models, multi-modality, local execution, and performance tuning.
|
||||
- **User Experience:** Improve the CLI's usability, performance, interactive features, and documentation.
|
||||
- **Tooling:** Built-in tools and the MCP ecosystem.
|
||||
- **Core:** Core functionality of the CLI
|
||||
- **Extensibility:** Bringing Gemini CLI to other surfaces e.g. GitHub.
|
||||
- **Contribution:** Improve the contribution process via test automation and CI/CD pipeline enhancements.
|
||||
- **Platform:** Manage installation, OS support, and the underlying CLI framework.
|
||||
- **Quality:** Focus on testing, reliability, performance, and overall product quality.
|
||||
- **Background Agents:** Enable long-running, autonomous tasks and proactive assistance.
|
||||
- **Security and Privacy:** For all things related to security and privacy
|
||||
|
||||
## How to Contribute
|
||||
|
||||
Gemini CLI is an open-source project, and we welcome contributions from the community! Whether you're a developer, a designer, or just an enthusiastic user, you can find our [Community Guidelines here](https://github.com/google-gemini/gemini-cli/blob/main/CONTRIBUTING.md) to learn how to get started. There are many ways to get involved:
|
||||
|
||||
- **Roadmap:** Please review and find areas in our [roadmap](https://github.com/google-gemini/gemini-cli/issues/4191) that you would like to contribute to. Contributions based on this will be easiest to integrate with.
|
||||
- **Report Bugs:** If you find an issue, please create a [bug](https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml) with as much detail as possible. If you believe it is a critical breaking issue preventing direct CLI usage, please tag it as `priority/p0`.
|
||||
- **Suggest Features:** Have a great idea? We'd love to hear it! Open a [feature request](https://github.com/google-gemini/gemini-cli/issues/new?template=feature_request.yml).
|
||||
- **Contribute Code:** Check out our [CONTRIBUTING.md](https://github.com/google-gemini/gemini-cli/blob/main/CONTRIBUTING.md) file for guidelines on how to submit pull requests. We have a list of "good first issues" for new contributors.
|
||||
- **Write Documentation:** Help us improve our documentation, tutorials, and examples.
|
||||
We are excited about the future of Gemini CLI and look forward to building it with you!
|
||||
@@ -346,22 +346,6 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
|
||||
}
|
||||
```
|
||||
|
||||
- **`skipNextSpeakerCheck`** (boolean):
|
||||
- **Description:** Skips the next speaker check after text responses. When enabled, the system bypasses analyzing whether the AI should continue speaking.
|
||||
- **Default:** `false`
|
||||
- **Example:**
|
||||
```json
|
||||
"skipNextSpeakerCheck": true
|
||||
```
|
||||
|
||||
- **`skipLoopDetection`** (boolean):
|
||||
- **Description:** Disables all loop detection checks (streaming and LLM-based). Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions.
|
||||
- **Default:** `false`
|
||||
- **Example:**
|
||||
```json
|
||||
"skipLoopDetection": true
|
||||
```
|
||||
|
||||
### Example `settings.json`:
|
||||
|
||||
```json
|
||||
@@ -389,8 +373,6 @@ If you are experiencing performance issues with file searching (e.g., with `@` c
|
||||
"usageStatisticsEnabled": true,
|
||||
"hideTips": false,
|
||||
"hideBanner": false,
|
||||
"skipNextSpeakerCheck": false,
|
||||
"skipLoopDetection": false,
|
||||
"maxSessionTurns": 10,
|
||||
"summarizeToolOutput": {
|
||||
"run_shell_command": {
|
||||
|
||||
336 docs/cli/enterprise.md Normal file
@@ -0,0 +1,336 @@
|
||||
# Gemini CLI for the Enterprise
|
||||
|
||||
This document outlines configuration patterns and best practices for deploying and managing Gemini CLI in an enterprise environment. By leveraging system-level settings, administrators can enforce security policies, manage tool access, and ensure a consistent experience for all users.
|
||||
|
||||
> **A Note on Security:** The patterns described in this document are intended to help administrators create a more controlled and secure environment for using Gemini CLI. However, they should not be considered a foolproof security boundary. A determined user with sufficient privileges on their local machine may still be able to circumvent these configurations. These measures are designed to prevent accidental misuse and enforce corporate policy in a managed environment, not to defend against a malicious actor with local administrative rights.
|
||||
|
||||
## Centralized Configuration: The System Settings File
|
||||
|
||||
The most powerful tools for enterprise administration are the system-wide settings files. These files allow you to define a baseline configuration (`system-defaults.json`) and a set of overrides (`settings.json`) that apply to all users on a machine. For a complete overview of configuration options, see the [Configuration documentation](./configuration.md).
|
||||
|
||||
Settings are merged from four files. The precedence order for single-value settings (like `theme`) is:
|
||||
|
||||
1. System Defaults (`system-defaults.json`)
|
||||
2. User Settings (`~/.gemini/settings.json`)
|
||||
3. Workspace Settings (`<project>/.gemini/settings.json`)
|
||||
4. System Overrides (`settings.json`)
|
||||
|
||||
This means the System Overrides file has the final say. For settings that are arrays (`includeDirectories`) or objects (`mcpServers`), the values are merged.
|
||||
|
||||
**Example of Merging and Precedence:**
|
||||
|
||||
Here is how settings from different levels are combined.
|
||||
|
||||
- **System Defaults `system-defaults.json`:**
|
||||
|
||||
```json
|
||||
{
|
||||
"theme": "default-corporate-theme",
|
||||
"includeDirectories": ["/etc/gemini-cli/common-context"]
|
||||
}
|
||||
```
|
||||
|
||||
- **User `settings.json` (`~/.gemini/settings.json`):**
|
||||
|
||||
```json
|
||||
{
|
||||
"theme": "user-preferred-dark-theme",
|
||||
"mcpServers": {
|
||||
"corp-server": {
|
||||
"command": "/usr/local/bin/corp-server-dev"
|
||||
},
|
||||
"user-tool": {
|
||||
"command": "npm start --prefix ~/tools/my-tool"
|
||||
}
|
||||
},
|
||||
"includeDirectories": ["~/gemini-context"]
|
||||
}
|
||||
```
|
||||
|
||||
- **Workspace `settings.json` (`<project>/.gemini/settings.json`):**
|
||||
|
||||
```json
|
||||
{
|
||||
"theme": "project-specific-light-theme",
|
||||
"mcpServers": {
|
||||
"project-tool": {
|
||||
"command": "npm start"
|
||||
}
|
||||
},
|
||||
"includeDirectories": ["./project-context"]
|
||||
}
|
||||
```
|
||||
|
||||
- **System Overrides `settings.json`:**
|
||||
```json
|
||||
{
|
||||
"theme": "system-enforced-theme",
|
||||
"mcpServers": {
|
||||
"corp-server": {
|
||||
"command": "/usr/local/bin/corp-server-prod"
|
||||
}
|
||||
},
|
||||
"includeDirectories": ["/etc/gemini-cli/global-context"]
|
||||
}
|
||||
```
|
||||
|
||||
This results in the following merged configuration:
|
||||
|
||||
- **Final Merged Configuration:**
|
||||
```json
|
||||
{
|
||||
"theme": "system-enforced-theme",
|
||||
"mcpServers": {
|
||||
"corp-server": {
|
||||
"command": "/usr/local/bin/corp-server-prod"
|
||||
},
|
||||
"user-tool": {
|
||||
"command": "npm start --prefix ~/tools/my-tool"
|
||||
},
|
||||
"project-tool": {
|
||||
"command": "npm start"
|
||||
}
|
||||
},
|
||||
"includeDirectories": [
|
||||
"/etc/gemini-cli/common-context",
|
||||
"~/gemini-context",
|
||||
"./project-context",
|
||||
"/etc/gemini-cli/global-context"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Why:**
|
||||
|
||||
- **`theme`**: The value from the system overrides (`system-enforced-theme`) is used, as it has the highest precedence.
|
||||
- **`mcpServers`**: The objects are merged. The `corp-server` definition from the system overrides takes precedence over the user's definition. The unique `user-tool` and `project-tool` are included.
|
||||
- **`includeDirectories`**: The arrays are concatenated in the order of System Defaults, User, Workspace, and then System Overrides.
|
||||
|
||||
- **Location**:
|
||||
- **Linux**: `/etc/gemini-cli/settings.json`
|
||||
- **Windows**: `C:\ProgramData\gemini-cli\settings.json`
|
||||
- **macOS**: `/Library/Application Support/GeminiCli/settings.json`
|
||||
- The path can be overridden using the `GEMINI_CLI_SYSTEM_SETTINGS_PATH` environment variable.
|
||||
- **Control**: This file should be managed by system administrators and protected with appropriate file permissions to prevent unauthorized modification by users.
|
||||
|
||||
By using the system settings file, you can enforce the security and configuration patterns described below.
|
||||
|
||||
## Restricting Tool Access
|
||||
|
||||
You can significantly enhance security by controlling which tools the Gemini model can use. This is achieved through the `coreTools` and `excludeTools` settings. For a list of available tools, see the [Tools documentation](../tools/index.md).
|
||||
|
||||
### Allowlisting with `coreTools`
|
||||
|
||||
The most secure approach is to explicitly add the tools and commands that users are permitted to execute to an allowlist. This prevents the use of any tool not on the approved list.
|
||||
|
||||
**Example:** Allow only safe, read-only file operations and listing files.
|
||||
|
||||
```json
|
||||
{
|
||||
"coreTools": ["ReadFileTool", "GlobTool", "ShellTool(ls)"]
|
||||
}
|
||||
```
|
||||
|
||||
### Blocklisting with `excludeTools`
|
||||
|
||||
Alternatively, you can add specific tools that are considered dangerous in your environment to a blocklist.
|
||||
|
||||
**Example:** Prevent the use of the shell tool for removing files.
|
||||
|
||||
```json
|
||||
{
|
||||
"excludeTools": ["ShellTool(rm -rf)"]
|
||||
}
|
||||
```
|
||||
|
||||
**Security Note:** Blocklisting with `excludeTools` is less secure than allowlisting with `coreTools`, as it relies on blocking known-bad commands, and clever users may find ways to bypass simple string-based blocks. **Allowlisting is the recommended approach.**
|
||||
|
||||
## Managing Custom Tools (MCP Servers)
|
||||
|
||||
If your organization uses custom tools via [Model-Context Protocol (MCP) servers](../core/tools-api.md), it is crucial to understand how server configurations are managed to apply security policies effectively.
|
||||
|
||||
### How MCP Server Configurations are Merged
|
||||
|
||||
Gemini CLI loads `settings.json` files from three levels: System, Workspace, and User. When it comes to the `mcpServers` object, these configurations are **merged**:
|
||||
|
||||
1. **Merging:** The lists of servers from all three levels are combined into a single list.
|
||||
2. **Precedence:** If a server with the **same name** is defined at multiple levels (e.g., a server named `corp-api` exists in both system and user settings), the definition from the highest-precedence level is used. The order of precedence is: **System > Workspace > User**.
|
||||
|
||||
This means a user **cannot** override the definition of a server that is already defined in the system-level settings. However, they **can** add new servers with unique names.
|
||||
|
||||
### Enforcing a Catalog of Tools
|
||||
|
||||
The security of your MCP tool ecosystem depends on a combination of defining the canonical servers and adding their names to an allowlist.
|
||||
|
||||
### Restricting Tools Within an MCP Server
|
||||
|
||||
For even greater security, especially when dealing with third-party MCP servers, you can restrict which specific tools from a server are exposed to the model. This is done using the `includeTools` and `excludeTools` properties within a server's definition. This allows you to use a subset of tools from a server without allowing potentially dangerous ones.
|
||||
|
||||
Following the principle of least privilege, it is highly recommended to use `includeTools` to create an allowlist of only the necessary tools.
|
||||
|
||||
**Example:** Only allow the `code-search` and `get-ticket-details` tools from a third-party MCP server, even if the server offers other tools like `delete-ticket`.
|
||||
|
||||
```json
|
||||
{
|
||||
"allowMCPServers": ["third-party-analyzer"],
|
||||
"mcpServers": {
|
||||
"third-party-analyzer": {
|
||||
"command": "/usr/local/bin/start-3p-analyzer.sh",
|
||||
"includeTools": ["code-search", "get-ticket-details"]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### More Secure Pattern: Define and Add to Allowlist in System Settings
|
||||
|
||||
To create a secure, centrally-managed catalog of tools, the system administrator **must** do both of the following in the system-level `settings.json` file:
|
||||
|
||||
1. **Define the full configuration** for every approved server in the `mcpServers` object. This ensures that even if a user defines a server with the same name, the secure system-level definition will take precedence.
|
||||
2. **Add the names** of those servers to an allowlist using the `allowMCPServers` setting. This is a critical security step that prevents users from running any servers that are not on this list. If this setting is omitted, the CLI will merge and allow any server defined by the user.
|
||||
|
||||
**Example System `settings.json`:**
|
||||
|
||||
1. Add the _names_ of all approved servers to an allowlist.
|
||||
This will prevent users from adding their own servers.
|
||||
|
||||
2. Provide the canonical _definition_ for each server on the allowlist.
|
||||
|
||||
```json
|
||||
{
|
||||
"allowMCPServers": ["corp-data-api", "source-code-analyzer"],
|
||||
"mcpServers": {
|
||||
"corp-data-api": {
|
||||
"command": "/usr/local/bin/start-corp-api.sh",
|
||||
"timeout": 5000
|
||||
},
|
||||
"source-code-analyzer": {
|
||||
"command": "/usr/local/bin/start-analyzer.sh"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This pattern is more secure because it uses both definition and an allowlist. Any server a user defines will either be overridden by the system definition (if it has the same name) or blocked because its name is not in the `allowMCPServers` list.
|
||||
|
||||
### Less Secure Pattern: Omitting the Allowlist
|
||||
|
||||
If the administrator defines the `mcpServers` object but fails to also specify the `allowMCPServers` allowlist, users may add their own servers.
|
||||
|
||||
**Example System `settings.json`:**
|
||||
|
||||
This configuration defines servers but does not enforce the allowlist.
|
||||
The administrator has NOT included the "allowMCPServers" setting.
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"corp-data-api": {
|
||||
"command": "/usr/local/bin/start-corp-api.sh"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In this scenario, a user can add their own server in their local `settings.json`. Because there is no `allowMCPServers` list to filter the merged results, the user's server will be added to the list of available tools and allowed to run.
|
||||
|
||||
## Enforcing Sandboxing for Security
|
||||
|
||||
To mitigate the risk of potentially harmful operations, you can enforce the use of sandboxing for all tool execution. The sandbox isolates tool execution in a containerized environment.
|
||||
|
||||
**Example:** Force all tool execution to happen within a Docker sandbox.
|
||||
|
||||
```json
|
||||
{
|
||||
"sandbox": "docker"
|
||||
}
|
||||
```
|
||||
|
||||
You can also specify a custom, hardened Docker image for the sandbox using the `--sandbox-image` command-line argument or by building a custom `sandbox.Dockerfile` as described in the [Sandboxing documentation](./configuration.md#sandboxing).
|
||||
|
||||
## Controlling Network Access via Proxy
|
||||
|
||||
In corporate environments with strict network policies, you can configure Gemini CLI to route all outbound traffic through a corporate proxy. This can be set via an environment variable, but it can also be enforced for custom tools via the `mcpServers` configuration.
|
||||
|
||||
**Example (for an MCP Server):**
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"proxied-server": {
|
||||
"command": "node",
|
||||
"args": ["mcp_server.js"],
|
||||
"env": {
|
||||
"HTTP_PROXY": "http://proxy.example.com:8080",
|
||||
"HTTPS_PROXY": "http://proxy.example.com:8080"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Telemetry and Auditing
|
||||
|
||||
For auditing and monitoring purposes, you can configure Gemini CLI to send telemetry data to a central location. This allows you to track tool usage and other events. For more information, see the [telemetry documentation](../telemetry.md).
|
||||
|
||||
**Example:** Enable telemetry and send it to a local OTLP collector. If `otlpEndpoint` is not specified, it defaults to `http://localhost:4317`.
|
||||
|
||||
```json
|
||||
{
|
||||
"telemetry": {
|
||||
"enabled": true,
|
||||
"target": "gcp",
|
||||
"logPrompts": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** Ensure that `logPrompts` is set to `false` in an enterprise setting to avoid collecting potentially sensitive information from user prompts.
|
||||
|
||||
## Putting It All Together: Example System `settings.json`
|
||||
|
||||
Here is an example of a system `settings.json` file that combines several of the patterns discussed above to create a secure, controlled environment for Gemini CLI.
|
||||
|
||||
```json
|
||||
{
|
||||
"sandbox": "docker",
|
||||
|
||||
"coreTools": [
|
||||
"ReadFileTool",
|
||||
"GlobTool",
|
||||
"ShellTool(ls)",
|
||||
"ShellTool(cat)",
|
||||
"ShellTool(grep)"
|
||||
],
|
||||
|
||||
"mcpServers": {
|
||||
"corp-tools": {
|
||||
"command": "/opt/gemini-tools/start.sh",
|
||||
"timeout": 5000
|
||||
}
|
||||
},
|
||||
"allowMCPServers": ["corp-tools"],
|
||||
|
||||
"telemetry": {
|
||||
"enabled": true,
|
||||
"target": "gcp",
|
||||
"otlpEndpoint": "https://telemetry-prod.example.com:4317",
|
||||
"logPrompts": false
|
||||
},
|
||||
|
||||
"bugCommand": {
|
||||
"urlTemplate": "https://servicedesk.example.com/new-ticket?title={title}&details={info}"
|
||||
},
|
||||
|
||||
"usageStatisticsEnabled": false
|
||||
}
|
||||
```
|
||||
|
||||
This configuration:
|
||||
|
||||
- Forces all tool execution into a Docker sandbox.
|
||||
- Strictly uses an allowlist for a small set of safe shell commands and file tools.
|
||||
- Defines and allows a single corporate MCP server for custom tools.
|
||||
- Enables telemetry for auditing, without logging prompt content.
|
||||
- Redirects the `/bug` command to an internal ticketing system.
|
||||
- Disables general usage statistics collection.
|
||||
@@ -81,20 +81,20 @@ You can install extensions using the `install` command. This command allows you
|
||||
|
||||
### Usage
|
||||
|
||||
`qwen extensions install <source> | [options]`
|
||||
`gemini extensions install <source> | [options]`
|
||||
|
||||
### Options
|
||||
|
||||
- `source <url> positional argument`: The URL of a Git repository to install the extension from. The repository must contain a `qwen-extension.json` file in its root.
|
||||
- `--path <path>`: The path to a local directory to install as an extension. The directory must contain a `qwen-extension.json` file.
|
||||
- `source <url> positional argument`: The URL of a Git repository to install the extension from. The repository must contain a `gemini-extension.json` file in its root.
|
||||
- `--path <path>`: The path to a local directory to install as an extension. The directory must contain a `gemini-extension.json` file.
|
||||
|
||||
# Variables
|
||||
|
||||
Qwen Code extensions allow variable substitution in `qwen-extension.json`. This can be useful if e.g., you need the current directory to run an MCP server using `"cwd": "${extensionPath}${/}run.ts"`.
|
||||
Gemini CLI extensions allow variable substitution in `gemini-extension.json`. This can be useful if e.g., you need the current directory to run an MCP server using `"cwd": "${extensionPath}${/}run.ts"`.
|
||||
|
||||
**Supported variables:**
|
||||
|
||||
| variable | description |
|
||||
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `${extensionPath}` | The fully-qualified path of the extension in the user's filesystem e.g., '/Users/username/.qwen/extensions/example-extension'. This will not unwrap symlinks. |
|
||||
| `${/} or ${pathSeparator}` | The path separator (differs per OS). |
|
||||
| variable | description |
|
||||
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `${extensionPath}` | The fully-qualified path of the extension in the user's filesystem e.g., '/Users/username/.gemini/extensions/example-extension'. This will not unwrap symlinks. |
|
||||
| `${/} or ${pathSeparator}` | The path separator (differs per OS). |
|
||||
|
||||
@@ -45,7 +45,7 @@ You can also install the extension directly from a marketplace.
|
||||
- **For VS Code Forks:** To support forks of VS Code, the extension is also published on the [Open VSX Registry](https://open-vsx.org/extension/qwenlm/qwen-code-vscode-ide-companion). Follow your editor's instructions for installing extensions from this registry.
|
||||
|
||||
> NOTE:
|
||||
> The "Qwen Code Companion" extension may appear towards the bottom of search results. If you don't see it immediately, try scrolling down or sorting by "Newly Published".
|
||||
> The "Gemini CLI Companion" extension may appear towards the bottom of search results. If you don't see it immediately, try scrolling down or sorting by "Newly Published".
|
||||
>
|
||||
> After manually installing the extension, you must run `/ide enable` in the CLI to activate the integration.
|
||||
|
||||
@@ -80,7 +80,7 @@ If connected, this command will show the IDE it's connected to and a list of rec
|
||||
|
||||
### Working with Diffs
|
||||
|
||||
When you ask Qwen model to modify a file, it can open a diff view directly in your editor.
|
||||
When you ask Gemini to modify a file, it can open a diff view directly in your editor.
|
||||
|
||||
**To accept a diff**, you can perform any of the following actions:
|
||||
|
||||
@@ -139,6 +139,6 @@ If you encounter issues with IDE integration, here are some common error message
|
||||
- **Cause:** You are running Qwen Code in a terminal or environment that is not a supported IDE.
|
||||
- **Solution:** Run Qwen Code from the integrated terminal of a supported IDE, like VS Code.
|
||||
|
||||
- **Message:** `No installer is available for IDE. Please install the Qwen Code Companion extension manually from the marketplace.`
|
||||
- **Message:** `No installer is available for IDE. Please install the Gemini CLI Companion extension manually from the marketplace.`
|
||||
- **Cause:** You ran `/ide install`, but the CLI does not have an automated installer for your specific IDE.
|
||||
- **Solution:** Open your IDE's extension marketplace, search for "Qwen Code Companion", and install it manually.
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
# Ignoring Files
|
||||
|
||||
This document provides an overview of the Qwen Ignore (`.qwenignore`) feature of Qwen Code.
|
||||
This document provides an overview of the Gemini Ignore (`.qwenignore`) feature of Qwen Code.
|
||||
|
||||
Qwen Code includes the ability to automatically ignore files, similar to `.gitignore` (used by Git). Adding paths to your `.qwenignore` file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git).
|
||||
Qwen Code includes the ability to automatically ignore files, similar to `.gitignore` (used by Git) and `.aiexclude` (used by Gemini Code Assist). Adding paths to your `.qwenignore` file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git).
|
||||
|
||||
## How it works
|
||||
|
||||
|
||||
343 docs/releases.md Normal file
@@ -0,0 +1,343 @@
|
||||
# Gemini CLI Releases
|
||||
|
||||
## Release Cadence and Tags
|
||||
|
||||
We will follow https://semver.org/ as closely as possible and will call out when or if we have to deviate from it. Our weekly releases will be minor version increments, and any bug fixes or hotfixes between releases will go out as patch versions on the most recent release.
|
||||
|
||||
### Preview
|
||||
|
||||
New preview releases will be published each week at UTC 2359 on Tuesdays. These releases will not have been fully vetted and may contain regressions or other outstanding issues. Please help us test, and install with the `preview` tag.
|
||||
|
||||
```bash
|
||||
npm install -g @google/gemini-cli@preview
|
||||
```
|
||||
|
||||
### Stable
|
||||
|
||||
- New stable releases will be published each week at UTC 2000 on Tuesdays. This will be the full promotion of last week's preview release plus any bug fixes and validations. Use the `latest` tag.
|
||||
|
||||
```bash
|
||||
npm install -g @google/gemini-cli@latest
|
||||
```
|
||||
|
||||
### Nightly
|
||||
|
||||
- New nightly releases will be published at UTC 0000 each day. These will contain all changes from the main branch as represented at the time of release. It should be assumed there are pending validations and issues. Use the `nightly` tag.
|
||||
|
||||
```bash
|
||||
npm install -g @google/gemini-cli@nightly
|
||||
```
|
||||
|
||||
# Release Process.
|
||||
|
||||
Where `x.y.z` is the next version to be released. In most all cases for the weekly release this will be an increment on `y`, aka minor version update. Major version updates `x` will need broader coordination and communication. For patches `z` see below. When possible we will do our best to adher to https://semver.org/
|
||||
|
||||
Our release cadence is: new releases are sent to the preview channel and then promoted to stable after a week. Version numbers follow SemVer, with weekly releases incrementing the minor version. Patches and bug fixes to both preview and stable releases increment the patch version. For example, if the current stable release is `0.3.0`, the next weekly release is `0.4.0`, and a hotfix to `0.3.0` ships as `0.3.1`.
|
||||
|
||||
## Nightly Release
|
||||
|
||||
Each night at UTC 0000 we will automatically deploy a nightly release from `main`. This will be a build of the next production release, `x.y.z`, published under the `nightly` tag.
|
||||
|
||||
## Create Preview Release
|
||||
|
||||
Each Tuesday at UTC 2359 we will automatically deploy a preview release of the next production release, `x.y.z`.
|
||||
|
||||
- This will happen as a scheduled run of the `release` action. It will be cut from `main`.
|
||||
- This will create a branch `release/vx.y.z-preview.n`
|
||||
- We will run evals and smoke testing against this branch and the npm package. For now this is manual smoke testing; we don't have a dedicated matrix or specific detailed process. There is work coming soon to make this more formalized and automatic; see https://github.com/google-gemini/gemini-cli/issues/3788
|
||||
- Users installing `@preview` will get this release as well
|
||||
|
||||
## Promote Stable Release
|
||||
|
||||
After one week (on the following Tuesday), with all signals a go, the current on-call person will manually release at UTC 2000.
|
||||
|
||||
- The release action will be used with the source branch as `release/vx.y.z-preview.n`
|
||||
- The version will be x.y.z
|
||||
- The releaser will create and merge a PR into `main` with the version changes.
|
||||
- Smoke tests and manual validation will be run. For now this is manual smoke testing; we don't have a dedicated matrix or specific detailed process. There is work coming soon to make this more formalized and automatic; see https://github.com/google-gemini/gemini-cli/issues/3788
|
||||
|
||||
## Patching Releases
|
||||
|
||||
If a critical bug needs to be fixed before the next scheduled release, follow this process to create a patch.
|
||||
|
||||
### 1. Create a Hotfix Branch
|
||||
|
||||
First, create a new branch for your fix. The source for this branch depends on whether you are patching a stable or a preview release.
|
||||
|
||||
- **For a stable release patch:**
|
||||
Create a branch from the Git tag of the version you need to patch. Tag names are formatted as `vx.y.z`.
|
||||
|
||||
```bash
|
||||
# Example: Create a hotfix branch for v0.2.0
|
||||
git checkout v0.2.0 -b hotfix/issue-123-fix-for-v0.2.0
|
||||
```
|
||||
|
||||
- **For a preview release patch:**
|
||||
Create a branch from the existing preview release branch, which is formatted as `release/vx.y.z-preview.n`.
|
||||
|
||||
```bash
|
||||
# Example: Create a hotfix branch for a preview release
|
||||
git checkout release/v0.2.0-preview.0 && git checkout -b hotfix/issue-456-fix-for-preview
|
||||
```
|
||||
|
||||
### 2. Implement the Fix
|
||||
|
||||
In your new hotfix branch, either create a new commit with the fix or cherry-pick an existing commit from the `main` branch. Then merge your changes into the source branch of the hotfix (e.g., https://github.com/google-gemini/gemini-cli/pull/6850).
|
||||
|
||||
### 3. Perform the Release
|
||||
|
||||
Follow the manual release process using the "Release" GitHub Actions workflow.
|
||||
|
||||
- **Version**: For stable patches, increment the patch version (e.g., `v0.2.0` -> `v0.2.1`). For preview patches, increment the preview number (e.g., `v0.2.0-preview.0` -> `v0.2.0-preview.1`).
|
||||
- **Ref**: Use your source branch as the reference (e.g., `release/v0.2.0-preview.0`).
|
||||
|
||||

|
||||
|
||||
### 4. Update Versions
|
||||
|
||||
After the hotfix is released, merge the changes back to the appropriate branch.
|
||||
|
||||
- **For a stable release hotfix:**
|
||||
Open a pull request to merge the release branch (e.g., `release/0.2.1`) back into `main`. This keeps the version number in `main` up to date.
|
||||
|
||||
- **For a preview release hotfix:**
|
||||
Open a pull request to merge the new preview release branch (e.g., `release/v0.2.0-preview.1`) back into the existing preview release branch (`release/v0.2.0-preview.0`); see https://github.com/google-gemini/gemini-cli/pull/6868 for an example.
|
||||
|
||||
## Release Schedule
|
||||
|
||||
| Date           | Stable UTC 2000 | Preview UTC 2359 |
| -------------- | --------------- | ---------------- |
| Aug 19th, 2025 | N/A             | 0.2.0-preview.0  |
| Aug 26th, 2025 | 0.2.0           | 0.3.0-preview.0  |
| Sep 2nd, 2025  | 0.3.0           | 0.4.0-preview.0  |
| Sep 9th, 2025  | 0.4.0           | 0.5.0-preview.0  |
| Sep 16th, 2025 | 0.5.0           | 0.6.0-preview.0  |
| Sep 23rd, 2025 | 0.6.0           | 0.7.0-preview.0  |
|
||||
|
||||
## How To Release
|
||||
|
||||
Releases are managed through the [release.yml](https://github.com/google-gemini/gemini-cli/actions/workflows/release.yml) GitHub Actions workflow. To perform a manual release for a patch or hotfix:
|
||||
|
||||
1. Navigate to the **Actions** tab of the repository.
|
||||
2. Select the **Release** workflow from the list.
|
||||
3. Click the **Run workflow** dropdown button.
|
||||
4. Fill in the required inputs:
|
||||
- **Version**: The exact version to release (e.g., `v0.2.1`).
|
||||
- **Ref**: The branch or commit SHA to release from (defaults to `main`).
|
||||
- **Dry Run**: Leave as `true` to test the workflow without publishing, or set to `false` to perform a live release.
|
||||
5. Click **Run workflow**.
|
||||
|
||||
### TLDR
|
||||
|
||||
Each release, whether automated or manual, performs the following steps:
|
||||
|
||||
1. Checks out the latest code from the `main` branch.
|
||||
1. Installs all dependencies.
|
||||
1. Runs the full suite of `preflight` checks and integration tests.
|
||||
1. If all tests succeed, it calculates the next version number based on the inputs.
|
||||
1. It creates a branch named `release/${VERSION}`.
|
||||
1. It creates a tag named `v${VERSION}`.
|
||||
1. It then builds and publishes the packages to npm with the provided version number.
|
||||
1. Finally, it creates a GitHub Release for the version.
|
||||
|
||||
### Failure Handling
|
||||
|
||||
If any step in the workflow fails, it will automatically create a new issue in the repository with the labels `bug` and `release-failure`. The issue will contain a link to the failed workflow run for easy debugging.
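
The exact implementation lives in the release workflow itself, but as a rough illustration, a failure-handling step of this kind can be written with the `@actions/github` toolkit. The sketch below is not the repository's actual code; the input name, issue title format, and everything beyond the `bug` and `release-failure` labels are assumptions.

```ts
import * as core from '@actions/core';
import * as github from '@actions/github';

// Minimal sketch: file an issue that links back to the failed workflow run.
async function reportReleaseFailure(): Promise<void> {
  const token = core.getInput('token', { required: true }); // assumed input name
  const octokit = github.getOctokit(token);
  const { owner, repo } = github.context.repo;
  const runUrl = `${github.context.serverUrl}/${owner}/${repo}/actions/runs/${github.context.runId}`;

  await octokit.rest.issues.create({
    owner,
    repo,
    title: `Release workflow failed (run ${github.context.runId})`, // assumed title format
    body: `The release workflow failed. See the run for details: ${runUrl}`,
    labels: ['bug', 'release-failure'],
  });
}

reportReleaseFailure().catch((err) => core.setFailed(String(err)));
```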
|
||||
|
||||
### Docker
|
||||
|
||||
We also run a Google Cloud Build job, [release-docker.yml](../.gcp/release-docker.yml), which publishes the sandbox Docker image to match the release. This will also be moved to GitHub and combined with the main release workflow once service account permissions are sorted out.
|
||||
|
||||
## Release Validation
|
||||
|
||||
After pushing a new release, smoke testing should be performed to ensure that the packages are working as expected. This can be done by installing the packages locally and running a set of tests to confirm they function correctly.
|
||||
|
||||
- `npx -y @google/gemini-cli@latest --version` to validate the push worked as expected, if you were not doing an rc or dev tag
|
||||
- `npx -y @google/gemini-cli@<release tag> --version` to validate the tag pushed appropriately
|
||||
- _This is destructive locally_ `npm uninstall @google/gemini-cli && npm uninstall -g @google/gemini-cli && npm cache clean --force && npm install @google/gemini-cli@<version>`
|
||||
- Smoke testing, i.e. a basic run-through exercising a few LLM commands and tools, is recommended to ensure that the packages are working as expected. We'll codify this more in the future.
|
||||
|
||||
## Local Testing and Validation: Changes to the Packaging and Publishing Process
|
||||
|
||||
If you need to test the release process without actually publishing to NPM or creating a public GitHub release, you can trigger the workflow manually from the GitHub UI.
|
||||
|
||||
1. Go to the [Actions tab](https://github.com/google-gemini/gemini-cli/actions/workflows/release.yml) of the repository.
|
||||
2. Click on the "Run workflow" dropdown.
|
||||
3. Leave the `dry_run` option checked (`true`).
|
||||
4. Click the "Run workflow" button.
|
||||
|
||||
This will run the entire release process but will skip the `npm publish` and `gh release create` steps. You can inspect the workflow logs to ensure everything is working as expected.
|
||||
|
||||
It is crucial to test any changes to the packaging and publishing process locally before committing them. This ensures that the packages will be published correctly and that they will work as expected when installed by a user.
|
||||
|
||||
To validate your changes, you can perform a dry run of the publishing process. This will simulate the publishing process without actually publishing the packages to the npm registry.
|
||||
|
||||
```bash
|
||||
npm_package_version=9.9.9 SANDBOX_IMAGE_REGISTRY="registry" SANDBOX_IMAGE_NAME="thename" npm run publish:npm --dry-run
|
||||
```
|
||||
|
||||
This command will do the following:
|
||||
|
||||
1. Build all the packages.
|
||||
2. Run all the prepublish scripts.
|
||||
3. Create the package tarballs that would be published to npm.
|
||||
4. Print a summary of the packages that would be published.
|
||||
|
||||
You can then inspect the generated tarballs to ensure that they contain the correct files and that the `package.json` files have been updated correctly. The tarballs will be created in the root of each package's directory (e.g., `packages/cli/google-gemini-cli-0.1.6.tgz`).
|
||||
|
||||
By performing a dry run, you can be confident that your changes to the packaging process are correct and that the packages will be published successfully.
|
||||
|
||||
## Release Deep Dive
|
||||
|
||||
The main goal of the release process is to take the source code from the packages/ directory, build it, and assemble a
|
||||
clean, self-contained package in a temporary `bundle` directory at the root of the project. This `bundle` directory is what
|
||||
actually gets published to NPM.
|
||||
|
||||
Here are the key stages:
|
||||
|
||||
Stage 1: Pre-Release Sanity Checks and Versioning
|
||||
|
||||
- What happens: Before any files are moved, the process ensures the project is in a good state. This involves running tests,
|
||||
linting, and type-checking (npm run preflight). The version number in the root package.json and packages/cli/package.json
|
||||
is updated to the new release version.
|
||||
- Why: This guarantees that only high-quality, working code is released. Versioning is the first step to signify a new
|
||||
release.
|
||||
|
||||
Stage 2: Building the Source Code
|
||||
|
||||
- What happens: The TypeScript source code in packages/core/src and packages/cli/src is compiled into JavaScript.
|
||||
- File movement:
|
||||
- `packages/core/src/**/*.ts` -> compiled to -> packages/core/dist/
|
||||
- `packages/cli/src/**/*.ts` -> compiled to -> packages/cli/dist/
|
||||
- Why: The TypeScript code written during development needs to be converted into plain JavaScript that can be run by
|
||||
Node.js. The core package is built first as the cli package depends on it.
|
||||
|
||||
Stage 3: Assembling the Final Publishable Package
|
||||
|
||||
This is the most critical stage where files are moved and transformed into their final state for publishing. A temporary
|
||||
`bundle` folder is created at the project root to house the final package contents.
|
||||
|
||||
1. The `package.json` is Transformed:
|
||||
- What happens: The package.json from packages/cli/ is read, modified, and written into the root `bundle`/ directory.
|
||||
- File movement: packages/cli/package.json -> (in-memory transformation) -> `bundle`/package.json
|
||||
- Why: The final package.json must be different from the one used in development. Key changes include:
|
||||
- Removing devDependencies.
|
||||
- Removing workspace-specific "dependencies": { "@gemini-cli/core": "workspace:\*" } and ensuring the core code is
|
||||
bundled directly into the final JavaScript file.
|
||||
- Ensuring the bin, main, and files fields point to the correct locations within the final package structure (see the sketch after this list).
|
||||
|
||||
2. The JavaScript Bundle is Created:
|
||||
- What happens: The built JavaScript from both packages/core/dist and packages/cli/dist are bundled into a single,
|
||||
executable JavaScript file.
|
||||
- File movement: packages/cli/dist/index.js + packages/core/dist/index.js -> (bundled by esbuild) -> `bundle`/gemini.js (or a
|
||||
similar name).
|
||||
- Why: This creates a single, optimized file that contains all the necessary application code. It simplifies the package
|
||||
by removing the need for the core package to be a separate dependency on NPM, as its code is now included directly.
|
||||
|
||||
3. Static and Supporting Files are Copied:
|
||||
- What happens: Essential files that are not part of the source code but are required for the package to work correctly
|
||||
or be well-described are copied into the `bundle` directory.
|
||||
- File movement:
|
||||
- README.md -> `bundle`/README.md
|
||||
- LICENSE -> `bundle`/LICENSE
|
||||
- packages/cli/src/utils/\*.sb (sandbox profiles) -> `bundle`/
|
||||
- Why:
|
||||
- The README.md and LICENSE are standard files that should be included in any NPM package.
|
||||
- The sandbox profiles (.sb files) are critical runtime assets required for the CLI's sandboxing feature to
|
||||
function. They must be located next to the final executable.
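
As a rough illustration of steps 1 to 3 above, the assembly can be pictured as a small Node script like the one below. This is a minimal sketch, not the repository's actual packaging script; the output file name `gemini.js`, the `files` list, and the use of the esbuild JavaScript API are assumptions based on the description above.

```ts
import * as fs from 'node:fs';
import * as path from 'node:path';
import { build } from 'esbuild';

async function assembleBundle(root: string): Promise<void> {
  const bundleDir = path.join(root, 'bundle');
  fs.mkdirSync(bundleDir, { recursive: true });

  // 1. Transform packages/cli/package.json into the publishable package.json.
  const pkg = JSON.parse(
    fs.readFileSync(path.join(root, 'packages/cli/package.json'), 'utf8'),
  );
  delete pkg.devDependencies; // not needed by end users
  if (pkg.dependencies) {
    delete pkg.dependencies['@gemini-cli/core']; // core is bundled directly below
  }
  pkg.main = 'gemini.js';
  pkg.bin = { gemini: 'gemini.js' };
  pkg.files = ['gemini.js', '*.sb', 'README.md', 'LICENSE'];
  fs.writeFileSync(
    path.join(bundleDir, 'package.json'),
    JSON.stringify(pkg, null, 2),
  );

  // 2. Bundle the compiled CLI (which pulls in core) into one executable file.
  await build({
    entryPoints: [path.join(root, 'packages/cli/dist/index.js')],
    bundle: true,
    platform: 'node',
    outfile: path.join(bundleDir, 'gemini.js'),
  });

  // 3. Copy static and supporting files next to the executable.
  for (const file of ['README.md', 'LICENSE']) {
    fs.copyFileSync(path.join(root, file), path.join(bundleDir, file));
  }
}

assembleBundle(process.cwd()).catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Running `npm publish` from inside `bundle/` then uploads only these assembled files, which is what Stage 4 below describes.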
|
||||
|
||||
Stage 4: Publishing to NPM
|
||||
|
||||
- What happens: The npm publish command is run from inside the root `bundle` directory.
|
||||
- Why: By running npm publish from within the `bundle` directory, only the files we carefully assembled in Stage 3 are uploaded
|
||||
to the NPM registry. This prevents any source code, test files, or development configurations from being accidentally
|
||||
published, resulting in a clean and minimal package for users.
|
||||
|
||||
Summary of File Flow
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
subgraph "Source Files"
|
||||
A["packages/core/src/*.ts<br/>packages/cli/src/*.ts"]
|
||||
B["packages/cli/package.json"]
|
||||
C["README.md<br/>LICENSE<br/>packages/cli/src/utils/*.sb"]
|
||||
end
|
||||
|
||||
subgraph "Process"
|
||||
D(Build)
|
||||
E(Transform)
|
||||
F(Assemble)
|
||||
G(Publish)
|
||||
end
|
||||
|
||||
subgraph "Artifacts"
|
||||
H["Bundled JS"]
|
||||
I["Final package.json"]
|
||||
J["bundle/"]
|
||||
end
|
||||
|
||||
subgraph "Destination"
|
||||
K["NPM Registry"]
|
||||
end
|
||||
|
||||
A --> D --> H
|
||||
B --> E --> I
|
||||
C --> F
|
||||
H --> F
|
||||
I --> F
|
||||
F --> J
|
||||
J --> G --> K
|
||||
```
|
||||
|
||||
This process ensures that the final published artifact is a purpose-built, clean, and efficient representation of the
|
||||
project, rather than a direct copy of the development workspace.
|
||||
@@ -133,28 +133,6 @@ Focus on creating clear, comprehensive documentation that helps both
|
||||
new contributors and end users understand the project.
|
||||
```
|
||||
|
||||
## Using Subagents Effectively
|
||||
|
||||
### Automatic Delegation
|
||||
|
||||
Qwen Code proactively delegates tasks based on:
|
||||
|
||||
- The task description in your request
|
||||
- The description field in subagent configurations
|
||||
- Current context and available tools
|
||||
|
||||
To encourage more proactive subagent use, include phrases like "use PROACTIVELY" or "MUST BE USED" in your description field.
|
||||
|
||||
### Explicit Invocation
|
||||
|
||||
Request a specific subagent by mentioning it in your command:
|
||||
|
||||
```
|
||||
> Let the testing-expert subagent create unit tests for the payment module
|
||||
> Have the documentation-writer subagent update the API reference
|
||||
> Get the react-specialist subagent to optimize this component's performance
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Development Workflow Agents
|
||||
|
||||
@@ -75,7 +75,7 @@ This guide provides solutions to common issues and debugging tips, including top
|
||||
|
||||
## Exit Codes
|
||||
|
||||
The Qwen Code uses specific exit codes to indicate the reason for termination. This is especially useful for scripting and automation.
|
||||
The Gemini CLI uses specific exit codes to indicate the reason for termination. This is especially useful for scripting and automation.
|
||||
|
||||
| Exit Code | Error Type | Description |
|
||||
| --------- | -------------------------- | --------------------------------------------------------------------------------------------------- |
|
||||
|
||||
12
package-lock.json
generated
@@ -1,12 +1,12 @@
|
||||
{
|
||||
"name": "@qwen-code/qwen-code",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {
|
||||
"": {
|
||||
"name": "@qwen-code/qwen-code",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"workspaces": [
|
||||
"packages/*"
|
||||
],
|
||||
@@ -13454,7 +13454,7 @@
|
||||
},
|
||||
"packages/cli": {
|
||||
"name": "@qwen-code/qwen-code",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"dependencies": {
|
||||
"@google/genai": "1.9.0",
|
||||
"@iarna/toml": "^2.2.5",
|
||||
@@ -13662,7 +13662,7 @@
|
||||
},
|
||||
"packages/core": {
|
||||
"name": "@qwen-code/qwen-code-core",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"dependencies": {
|
||||
"@google/genai": "1.13.0",
|
||||
"@lvce-editor/ripgrep": "^1.6.0",
|
||||
@@ -13788,7 +13788,7 @@
|
||||
},
|
||||
"packages/test-utils": {
|
||||
"name": "@qwen-code/qwen-code-test-utils",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"dev": true,
|
||||
"license": "Apache-2.0",
|
||||
"devDependencies": {
|
||||
@@ -13800,7 +13800,7 @@
|
||||
},
|
||||
"packages/vscode-ide-companion": {
|
||||
"name": "qwen-code-vscode-ide-companion",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"license": "LICENSE",
|
||||
"dependencies": {
|
||||
"@modelcontextprotocol/sdk": "^1.15.1",
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@qwen-code/qwen-code",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"engines": {
|
||||
"node": ">=20.0.0"
|
||||
},
|
||||
@@ -13,7 +13,7 @@
|
||||
"url": "git+https://github.com/QwenLM/qwen-code.git"
|
||||
},
|
||||
"config": {
|
||||
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.11"
|
||||
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.12-nightly.0"
|
||||
},
|
||||
"scripts": {
|
||||
"start": "node scripts/start.js",
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@qwen-code/qwen-code",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"description": "Qwen Code",
|
||||
"repository": {
|
||||
"type": "git",
|
||||
@@ -25,7 +25,7 @@
|
||||
"dist"
|
||||
],
|
||||
"config": {
|
||||
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.11"
|
||||
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.12-nightly.0"
|
||||
},
|
||||
"dependencies": {
|
||||
"@google/genai": "1.9.0",
|
||||
|
||||
@@ -14,7 +14,7 @@ import { enableCommand } from './extensions/enable.js';
|
||||
|
||||
export const extensionsCommand: CommandModule = {
|
||||
command: 'extensions <command>',
|
||||
describe: 'Manage Qwen Code extensions.',
|
||||
describe: 'Manage Gemini CLI extensions.',
|
||||
builder: (yargs) =>
|
||||
yargs
|
||||
.command(installCommand)
|
||||
|
||||
@@ -629,7 +629,6 @@ export async function loadCliConfig(
|
||||
shouldUseNodePtyShell: settings.tools?.usePty,
|
||||
skipNextSpeakerCheck: settings.model?.skipNextSpeakerCheck,
|
||||
enablePromptCompletion: settings.general?.enablePromptCompletion ?? false,
|
||||
skipLoopDetection: settings.skipLoopDetection ?? false,
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
@@ -330,14 +330,14 @@ describe('installExtension', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should throw an error and cleanup if qwen-extension.json is missing', async () => {
|
||||
it('should throw an error and cleanup if gemini-extension.json is missing', async () => {
|
||||
const sourceExtDir = path.join(tempHomeDir, 'bad-extension');
|
||||
fs.mkdirSync(sourceExtDir, { recursive: true });
|
||||
|
||||
await expect(
|
||||
installExtension({ source: sourceExtDir, type: 'local' }),
|
||||
).rejects.toThrow(
|
||||
`Invalid extension at ${sourceExtDir}. Please make sure it has a valid qwen-extension.json file.`,
|
||||
`Invalid extension at ${sourceExtDir}. Please make sure it has a valid gemini-extension.json file.`,
|
||||
);
|
||||
|
||||
const targetExtDir = path.join(userExtensionsDir, 'bad-extension');
|
||||
@@ -345,8 +345,8 @@ describe('installExtension', () => {
|
||||
});
|
||||
|
||||
it('should install an extension from a git URL', async () => {
|
||||
const gitUrl = 'https://github.com/google/qwen-extensions.git';
|
||||
const extensionName = 'qwen-extensions';
|
||||
const gitUrl = 'https://github.com/google/gemini-extensions.git';
|
||||
const extensionName = 'gemini-extensions';
|
||||
const targetExtDir = path.join(userExtensionsDir, extensionName);
|
||||
const metadataPath = path.join(targetExtDir, INSTALL_METADATA_FILENAME);
|
||||
|
||||
@@ -555,8 +555,8 @@ describe('updateExtension', () => {
|
||||
|
||||
it('should update a git-installed extension', async () => {
|
||||
// 1. "Install" an extension
|
||||
const gitUrl = 'https://github.com/google/qwen-extensions.git';
|
||||
const extensionName = 'qwen-extensions';
|
||||
const gitUrl = 'https://github.com/google/gemini-extensions.git';
|
||||
const extensionName = 'gemini-extensions';
|
||||
const targetExtDir = path.join(userExtensionsDir, extensionName);
|
||||
const metadataPath = path.join(targetExtDir, INSTALL_METADATA_FILENAME);
|
||||
|
||||
|
||||
@@ -71,7 +71,9 @@ export class ExtensionStorage {
|
||||
}
|
||||
|
||||
static async createTmpDir(): Promise<string> {
|
||||
return await fs.promises.mkdtemp(path.join(os.tmpdir(), 'qwen-extension'));
|
||||
return await fs.promises.mkdtemp(
|
||||
path.join(os.tmpdir(), 'gemini-extension'),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -353,11 +355,11 @@ export async function installExtension(
|
||||
const newExtension = loadExtension(localSourcePath);
|
||||
if (!newExtension) {
|
||||
throw new Error(
|
||||
`Invalid extension at ${installMetadata.source}. Please make sure it has a valid qwen-extension.json file.`,
|
||||
`Invalid extension at ${installMetadata.source}. Please make sure it has a valid gemini-extension.json file.`,
|
||||
);
|
||||
}
|
||||
|
||||
// ~/.qwen/extensions/{ExtensionConfig.name}.
|
||||
// ~/.gemini/extensions/{ExtensionConfig.name}.
|
||||
newExtensionName = newExtension.config.name;
|
||||
const extensionStorage = new ExtensionStorage(newExtensionName);
|
||||
const destinationPath = extensionStorage.getExtensionDir();
|
||||
@@ -453,7 +455,7 @@ export async function updateExtension(
|
||||
}
|
||||
if (!extension.installMetadata) {
|
||||
throw new Error(
|
||||
`Extension cannot be updated because it is missing the .qwen-extension.install.json file. To update manually, uninstall and then reinstall the updated version.`,
|
||||
`Extension cannot be updated because it is missing the .gemini-extension.install.json file. To update manually, uninstall and then reinstall the updated version.`,
|
||||
);
|
||||
}
|
||||
const originalVersion = extension.config.version;
|
||||
|
||||
@@ -741,16 +741,6 @@ export const SETTINGS_SCHEMA = {
|
||||
description: 'Enable extension management features.',
|
||||
showInDialog: false,
|
||||
},
|
||||
visionModelPreview: {
|
||||
type: 'boolean',
|
||||
label: 'Vision Model Preview',
|
||||
category: 'Experimental',
|
||||
requiresRestart: false,
|
||||
default: false,
|
||||
description:
|
||||
'Enable vision model support and auto-switching functionality. When disabled, vision models like qwen-vl-max-latest will be hidden and auto-switching will not occur.',
|
||||
showInDialog: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
|
||||
@@ -873,15 +863,6 @@ export const SETTINGS_SCHEMA = {
|
||||
description: 'Skip the next speaker check.',
|
||||
showInDialog: true,
|
||||
},
|
||||
skipLoopDetection: {
|
||||
type: 'boolean',
|
||||
label: 'Skip Loop Detection',
|
||||
category: 'General',
|
||||
requiresRestart: false,
|
||||
default: false,
|
||||
description: 'Disable all loop detection checks (streaming and LLM).',
|
||||
showInDialog: true,
|
||||
},
|
||||
enableWelcomeBack: {
|
||||
type: 'boolean',
|
||||
label: 'Enable Welcome Back',
|
||||
|
||||
@@ -56,13 +56,6 @@ vi.mock('../ui/commands/mcpCommand.js', () => ({
|
||||
kind: 'BUILT_IN',
|
||||
},
|
||||
}));
|
||||
vi.mock('../ui/commands/modelCommand.js', () => ({
|
||||
modelCommand: {
|
||||
name: 'model',
|
||||
description: 'Model command',
|
||||
kind: 'BUILT_IN',
|
||||
},
|
||||
}));
|
||||
|
||||
describe('BuiltinCommandLoader', () => {
|
||||
let mockConfig: Config;
|
||||
@@ -133,8 +126,5 @@ describe('BuiltinCommandLoader', () => {
|
||||
|
||||
const mcpCmd = commands.find((c) => c.name === 'mcp');
|
||||
expect(mcpCmd).toBeDefined();
|
||||
|
||||
const modelCmd = commands.find((c) => c.name === 'model');
|
||||
expect(modelCmd).toBeDefined();
|
||||
});
|
||||
});
|
||||
|
||||
@@ -35,7 +35,6 @@ import { settingsCommand } from '../ui/commands/settingsCommand.js';
|
||||
import { vimCommand } from '../ui/commands/vimCommand.js';
|
||||
import { setupGithubCommand } from '../ui/commands/setupGithubCommand.js';
|
||||
import { terminalSetupCommand } from '../ui/commands/terminalSetupCommand.js';
|
||||
import { modelCommand } from '../ui/commands/modelCommand.js';
|
||||
import { agentsCommand } from '../ui/commands/agentsCommand.js';
|
||||
|
||||
/**
|
||||
@@ -72,7 +71,6 @@ export class BuiltinCommandLoader implements ICommandLoader {
|
||||
initCommand,
|
||||
mcpCommand,
|
||||
memoryCommand,
|
||||
modelCommand,
|
||||
privacyCommand,
|
||||
quitCommand,
|
||||
quitConfirmCommand,
|
||||
|
||||
@@ -525,7 +525,7 @@ describe('FileCommandLoader', () => {
|
||||
).getProjectCommandsDir();
|
||||
const extensionDir = path.join(
|
||||
process.cwd(),
|
||||
'.qwen/extensions/test-ext',
|
||||
'.gemini/extensions/test-ext',
|
||||
);
|
||||
|
||||
mock({
|
||||
@@ -536,7 +536,7 @@ describe('FileCommandLoader', () => {
|
||||
'project.toml': 'prompt = "Project command"',
|
||||
},
|
||||
[extensionDir]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'test-ext',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
@@ -576,12 +576,12 @@ describe('FileCommandLoader', () => {
|
||||
).getProjectCommandsDir();
|
||||
const extensionDir = path.join(
|
||||
process.cwd(),
|
||||
'.qwen/extensions/test-ext',
|
||||
'.gemini/extensions/test-ext',
|
||||
);
|
||||
|
||||
mock({
|
||||
[extensionDir]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'test-ext',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
@@ -670,16 +670,16 @@ describe('FileCommandLoader', () => {
|
||||
it('only loads commands from active extensions', async () => {
|
||||
const extensionDir1 = path.join(
|
||||
process.cwd(),
|
||||
'.qwen/extensions/active-ext',
|
||||
'.gemini/extensions/active-ext',
|
||||
);
|
||||
const extensionDir2 = path.join(
|
||||
process.cwd(),
|
||||
'.qwen/extensions/inactive-ext',
|
||||
'.gemini/extensions/inactive-ext',
|
||||
);
|
||||
|
||||
mock({
|
||||
[extensionDir1]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'active-ext',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
@@ -688,7 +688,7 @@ describe('FileCommandLoader', () => {
|
||||
},
|
||||
},
|
||||
[extensionDir2]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'inactive-ext',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
@@ -727,12 +727,12 @@ describe('FileCommandLoader', () => {
|
||||
it('handles missing extension commands directory gracefully', async () => {
|
||||
const extensionDir = path.join(
|
||||
process.cwd(),
|
||||
'.qwen/extensions/no-commands',
|
||||
'.gemini/extensions/no-commands',
|
||||
);
|
||||
|
||||
mock({
|
||||
[extensionDir]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'no-commands',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
@@ -757,11 +757,11 @@ describe('FileCommandLoader', () => {
|
||||
});
|
||||
|
||||
it('handles nested command structure in extensions', async () => {
|
||||
const extensionDir = path.join(process.cwd(), '.qwen/extensions/a');
|
||||
const extensionDir = path.join(process.cwd(), '.gemini/extensions/a');
|
||||
|
||||
mock({
|
||||
[extensionDir]: {
|
||||
'qwen-extension.json': JSON.stringify({
|
||||
'gemini-extension.json': JSON.stringify({
|
||||
name: 'a',
|
||||
version: '1.0.0',
|
||||
}),
|
||||
|
||||
@@ -53,17 +53,6 @@ import { FolderTrustDialog } from './components/FolderTrustDialog.js';
|
||||
import { ShellConfirmationDialog } from './components/ShellConfirmationDialog.js';
|
||||
import { QuitConfirmationDialog } from './components/QuitConfirmationDialog.js';
|
||||
import { RadioButtonSelect } from './components/shared/RadioButtonSelect.js';
|
||||
import { ModelSelectionDialog } from './components/ModelSelectionDialog.js';
|
||||
import {
|
||||
ModelSwitchDialog,
|
||||
type VisionSwitchOutcome,
|
||||
} from './components/ModelSwitchDialog.js';
|
||||
import {
|
||||
getOpenAIAvailableModelFromEnv,
|
||||
getFilteredQwenModels,
|
||||
type AvailableModel,
|
||||
} from './models/availableModels.js';
|
||||
import { processVisionSwitchOutcome } from './hooks/useVisionAutoSwitch.js';
|
||||
import {
|
||||
AgentCreationWizard,
|
||||
AgentsManagerDialog,
|
||||
@@ -153,11 +142,9 @@ function isToolExecuting(pendingHistoryItems: HistoryItemWithoutId[]) {
|
||||
|
||||
export const AppWrapper = (props: AppProps) => {
|
||||
const kittyProtocolStatus = useKittyKeyboardProtocol();
|
||||
const nodeMajorVersion = parseInt(process.versions.node.split('.')[0], 10);
|
||||
return (
|
||||
<KeypressProvider
|
||||
kittyProtocolEnabled={kittyProtocolStatus.enabled}
|
||||
pasteWorkaround={process.platform === 'win32' || nodeMajorVersion < 20}
|
||||
config={props.config}
|
||||
debugKeystrokeLogging={
|
||||
props.settings.merged.general?.debugKeystrokeLogging
|
||||
@@ -259,20 +246,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
onWorkspaceMigrationDialogClose,
|
||||
} = useWorkspaceMigration(settings);
|
||||
|
||||
// Model selection dialog states
|
||||
const [isModelSelectionDialogOpen, setIsModelSelectionDialogOpen] =
|
||||
useState(false);
|
||||
const [isVisionSwitchDialogOpen, setIsVisionSwitchDialogOpen] =
|
||||
useState(false);
|
||||
const [visionSwitchResolver, setVisionSwitchResolver] = useState<{
|
||||
resolve: (result: {
|
||||
modelOverride?: string;
|
||||
persistSessionModel?: string;
|
||||
showGuidance?: boolean;
|
||||
}) => void;
|
||||
reject: () => void;
|
||||
} | null>(null);
|
||||
|
||||
useEffect(() => {
|
||||
const unsubscribe = ideContext.subscribeToIdeContext(setIdeContextState);
|
||||
// Set the initial value
|
||||
@@ -615,75 +588,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
openAuthDialog();
|
||||
}, [openAuthDialog, setAuthError]);
|
||||
|
||||
// Vision switch handler for auto-switch functionality
|
||||
const handleVisionSwitchRequired = useCallback(
|
||||
async (_query: unknown) =>
|
||||
new Promise<{
|
||||
modelOverride?: string;
|
||||
persistSessionModel?: string;
|
||||
showGuidance?: boolean;
|
||||
}>((resolve, reject) => {
|
||||
setVisionSwitchResolver({ resolve, reject });
|
||||
setIsVisionSwitchDialogOpen(true);
|
||||
}),
|
||||
[],
|
||||
);
|
||||
|
||||
const handleVisionSwitchSelect = useCallback(
|
||||
(outcome: VisionSwitchOutcome) => {
|
||||
setIsVisionSwitchDialogOpen(false);
|
||||
if (visionSwitchResolver) {
|
||||
const result = processVisionSwitchOutcome(outcome);
|
||||
visionSwitchResolver.resolve(result);
|
||||
setVisionSwitchResolver(null);
|
||||
}
|
||||
},
|
||||
[visionSwitchResolver],
|
||||
);
|
||||
|
||||
const handleModelSelectionOpen = useCallback(() => {
|
||||
setIsModelSelectionDialogOpen(true);
|
||||
}, []);
|
||||
|
||||
const handleModelSelectionClose = useCallback(() => {
|
||||
setIsModelSelectionDialogOpen(false);
|
||||
}, []);
|
||||
|
||||
const handleModelSelect = useCallback(
|
||||
(modelId: string) => {
|
||||
config.setModel(modelId);
|
||||
setCurrentModel(modelId);
|
||||
setIsModelSelectionDialogOpen(false);
|
||||
addItem(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: `Switched model to \`${modelId}\` for this session.`,
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
},
|
||||
[config, setCurrentModel, addItem],
|
||||
);
|
||||
|
||||
const getAvailableModelsForCurrentAuth = useCallback((): AvailableModel[] => {
|
||||
const contentGeneratorConfig = config.getContentGeneratorConfig();
|
||||
if (!contentGeneratorConfig) return [];
|
||||
|
||||
const visionModelPreviewEnabled =
|
||||
settings.merged.experimental?.visionModelPreview ?? false;
|
||||
|
||||
switch (contentGeneratorConfig.authType) {
|
||||
case AuthType.QWEN_OAUTH:
|
||||
return getFilteredQwenModels(visionModelPreviewEnabled);
|
||||
case AuthType.USE_OPENAI: {
|
||||
const openAIModel = getOpenAIAvailableModelFromEnv();
|
||||
return openAIModel ? [openAIModel] : [];
|
||||
}
|
||||
default:
|
||||
return [];
|
||||
}
|
||||
}, [config, settings.merged.experimental?.visionModelPreview]);
|
||||
|
||||
// Core hooks and processors
|
||||
const {
|
||||
vimEnabled: vimModeEnabled,
|
||||
@@ -714,7 +618,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
setQuittingMessages,
|
||||
openPrivacyNotice,
|
||||
openSettingsDialog,
|
||||
handleModelSelectionOpen,
|
||||
openSubagentCreateDialog,
|
||||
openAgentsManagerDialog,
|
||||
toggleVimEnabled,
|
||||
@@ -759,18 +662,10 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
setModelSwitchedFromQuotaError,
|
||||
refreshStatic,
|
||||
() => cancelHandlerRef.current(),
|
||||
settings.merged.experimental?.visionModelPreview ?? false,
|
||||
handleVisionSwitchRequired,
|
||||
);
|
||||
|
||||
const pendingHistoryItems = useMemo(
|
||||
() =>
|
||||
[...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems].map(
|
||||
(item, index) => ({
|
||||
...item,
|
||||
id: index,
|
||||
}),
|
||||
),
|
||||
() => [...pendingSlashCommandHistoryItems, ...pendingGeminiHistoryItems],
|
||||
[pendingSlashCommandHistoryItems, pendingGeminiHistoryItems],
|
||||
);
|
||||
|
||||
@@ -1131,8 +1026,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
!isAuthDialogOpen &&
|
||||
!isThemeDialogOpen &&
|
||||
!isEditorDialogOpen &&
|
||||
!isModelSelectionDialogOpen &&
|
||||
!isVisionSwitchDialogOpen &&
|
||||
!isSubagentCreateDialogOpen &&
|
||||
!showPrivacyNotice &&
|
||||
!showWelcomeBackDialog &&
|
||||
@@ -1154,8 +1047,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
showWelcomeBackDialog,
|
||||
welcomeBackChoice,
|
||||
geminiClient,
|
||||
isModelSelectionDialogOpen,
|
||||
isVisionSwitchDialogOpen,
|
||||
]);
|
||||
|
||||
if (quittingMessages) {
|
||||
@@ -1228,14 +1119,16 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
</Static>
|
||||
<OverflowProvider>
|
||||
<Box ref={pendingHistoryItemRef} flexDirection="column">
|
||||
{pendingHistoryItems.map((item) => (
|
||||
{pendingHistoryItems.map((item, i) => (
|
||||
<HistoryItemDisplay
|
||||
key={item.id}
|
||||
key={i}
|
||||
availableTerminalHeight={
|
||||
constrainHeight ? availableTerminalHeight : undefined
|
||||
}
|
||||
terminalWidth={mainAreaWidth}
|
||||
item={item}
|
||||
// TODO(taehykim): It seems like references to ids aren't necessary in
|
||||
// HistoryItemDisplay. Refactor later. Use a fake id for now.
|
||||
item={{ ...item, id: 0 }}
|
||||
isPending={true}
|
||||
config={config}
|
||||
isFocused={!isEditorDialogOpen}
|
||||
@@ -1423,15 +1316,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => {
|
||||
onExit={exitEditorDialog}
|
||||
/>
|
||||
</Box>
|
||||
) : isModelSelectionDialogOpen ? (
|
||||
<ModelSelectionDialog
|
||||
availableModels={getAvailableModelsForCurrentAuth()}
|
||||
currentModel={currentModel}
|
||||
onSelect={handleModelSelect}
|
||||
onCancel={handleModelSelectionClose}
|
||||
/>
|
||||
) : isVisionSwitchDialogOpen ? (
|
||||
<ModelSwitchDialog onSelect={handleVisionSwitchSelect} />
|
||||
) : showPrivacyNotice ? (
|
||||
<PrivacyNotice
|
||||
onExit={() => setShowPrivacyNotice(false)}
|
||||
|
||||
@@ -53,7 +53,7 @@ describe('initCommand', () => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it(`should ask for confirmation if ${DEFAULT_CONTEXT_FILENAME} already exists and is non-empty`, async () => {
|
||||
it(`should inform the user if ${DEFAULT_CONTEXT_FILENAME} already exists and is non-empty`, async () => {
|
||||
// Arrange: Simulate that the file exists
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.spyOn(fs, 'readFileSync').mockReturnValue('# Existing content');
|
||||
@@ -61,15 +61,13 @@ describe('initCommand', () => {
|
||||
// Act: Run the command's action
|
||||
const result = await initCommand.action!(mockContext, '');
|
||||
|
||||
// Assert: Check for the correct confirmation request
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
type: 'confirm_action',
|
||||
prompt: expect.anything(), // React element, not a string
|
||||
originalInvocation: expect.anything(),
|
||||
}),
|
||||
);
|
||||
// Assert: Ensure no file was written yet
|
||||
// Assert: Check for the correct informational message
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'info',
|
||||
content: `A ${DEFAULT_CONTEXT_FILENAME} file already exists in this directory. No changes were made.`,
|
||||
});
|
||||
// Assert: Ensure no file was written
|
||||
expect(fs.writeFileSync).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -93,13 +91,9 @@ describe('initCommand', () => {
|
||||
);
|
||||
|
||||
// Assert: Check that the correct prompt is submitted
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
type: 'submit_prompt',
|
||||
content: expect.stringContaining(
|
||||
'You are Qwen Code, an interactive CLI agent',
|
||||
),
|
||||
}),
|
||||
expect(result.type).toBe('submit_prompt');
|
||||
expect(result.content).toContain(
|
||||
'You are Qwen Code, an interactive CLI agent',
|
||||
);
|
||||
});
|
||||
|
||||
@@ -110,43 +104,7 @@ describe('initCommand', () => {
|
||||
const result = await initCommand.action!(mockContext, '');
|
||||
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(geminiMdPath, '', 'utf8');
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
type: 'submit_prompt',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it(`should regenerate ${DEFAULT_CONTEXT_FILENAME} when overwrite is confirmed`, async () => {
|
||||
// Arrange: Simulate that the file exists and overwrite is confirmed
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.spyOn(fs, 'readFileSync').mockReturnValue('# Existing content');
|
||||
mockContext.overwriteConfirmed = true;
|
||||
|
||||
// Act: Run the command's action
|
||||
const result = await initCommand.action!(mockContext, '');
|
||||
|
||||
// Assert: Check that writeFileSync was called correctly
|
||||
expect(fs.writeFileSync).toHaveBeenCalledWith(geminiMdPath, '', 'utf8');
|
||||
|
||||
// Assert: Check that an informational message was added to the UI
|
||||
expect(mockContext.ui.addItem).toHaveBeenCalledWith(
|
||||
{
|
||||
type: 'info',
|
||||
text: `Empty ${DEFAULT_CONTEXT_FILENAME} created. Now analyzing the project to populate it.`,
|
||||
},
|
||||
expect.any(Number),
|
||||
);
|
||||
|
||||
// Assert: Check that the correct prompt is submitted
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
type: 'submit_prompt',
|
||||
content: expect.stringContaining(
|
||||
'You are Qwen Code, an interactive CLI agent',
|
||||
),
|
||||
}),
|
||||
);
|
||||
expect(result.type).toBe('submit_prompt');
|
||||
});
|
||||
|
||||
it('should return an error if config is not available', async () => {
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* Copyright 2025 Google LLC
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
@@ -13,8 +13,6 @@ import type {
|
||||
} from './types.js';
|
||||
import { getCurrentGeminiMdFilename } from '@qwen-code/qwen-code-core';
|
||||
import { CommandKind } from './types.js';
|
||||
import { Text } from 'ink';
|
||||
import React from 'react';
|
||||
|
||||
export const initCommand: SlashCommand = {
|
||||
name: 'init',
|
||||
@@ -37,27 +35,15 @@ export const initCommand: SlashCommand = {
|
||||
|
||||
try {
|
||||
if (fs.existsSync(contextFilePath)) {
|
||||
// If file exists but is empty (or whitespace), continue to initialize
|
||||
// If file exists but is empty (or whitespace), continue to initialize; otherwise, bail out
|
||||
try {
|
||||
const existing = fs.readFileSync(contextFilePath, 'utf8');
|
||||
if (existing && existing.trim().length > 0) {
|
||||
// File exists and has content - ask for confirmation to overwrite
|
||||
if (!context.overwriteConfirmed) {
|
||||
return {
|
||||
type: 'confirm_action',
|
||||
// TODO: Move to .tsx file to use JSX syntax instead of React.createElement
|
||||
// For now, using React.createElement to maintain .ts compatibility for PR review
|
||||
prompt: React.createElement(
|
||||
Text,
|
||||
null,
|
||||
`A ${contextFileName} file already exists in this directory. Do you want to regenerate it?`,
|
||||
),
|
||||
originalInvocation: {
|
||||
raw: context.invocation?.raw || '/init',
|
||||
},
|
||||
};
|
||||
}
|
||||
// User confirmed overwrite, continue with regeneration
|
||||
return {
|
||||
type: 'message',
|
||||
messageType: 'info',
|
||||
content: `A ${contextFileName} file already exists in this directory. No changes were made.`,
|
||||
};
|
||||
}
|
||||
} catch {
|
||||
// If we fail to read, conservatively proceed to (re)create the file
|
||||
|
||||
@@ -1,179 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach, vi } from 'vitest';
|
||||
import { modelCommand } from './modelCommand.js';
|
||||
import { type CommandContext } from './types.js';
|
||||
import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
|
||||
import {
|
||||
AuthType,
|
||||
type ContentGeneratorConfig,
|
||||
type Config,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import * as availableModelsModule from '../models/availableModels.js';
|
||||
|
||||
// Mock the availableModels module
|
||||
vi.mock('../models/availableModels.js', () => ({
|
||||
AVAILABLE_MODELS_QWEN: [
|
||||
{ id: 'qwen3-coder-plus', label: 'qwen3-coder-plus' },
|
||||
{ id: 'qwen-vl-max-latest', label: 'qwen-vl-max', isVision: true },
|
||||
],
|
||||
getOpenAIAvailableModelFromEnv: vi.fn(),
|
||||
}));
|
||||
|
||||
// Helper function to create a mock config
|
||||
function createMockConfig(
|
||||
contentGeneratorConfig: ContentGeneratorConfig | null,
|
||||
): Partial<Config> {
|
||||
return {
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue(contentGeneratorConfig),
|
||||
};
|
||||
}
|
||||
|
||||
describe('modelCommand', () => {
|
||||
let mockContext: CommandContext;
|
||||
const mockGetOpenAIAvailableModelFromEnv = vi.mocked(
|
||||
availableModelsModule.getOpenAIAvailableModelFromEnv,
|
||||
);
|
||||
|
||||
beforeEach(() => {
|
||||
mockContext = createMockCommandContext();
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should have the correct name and description', () => {
|
||||
expect(modelCommand.name).toBe('model');
|
||||
expect(modelCommand.description).toBe('Switch the model for this session');
|
||||
});
|
||||
|
||||
it('should return error when config is not available', async () => {
|
||||
mockContext.services.config = null;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Configuration not available.',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error when content generator config is not available', async () => {
|
||||
const mockConfig = createMockConfig(null);
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Content generator configuration not available.',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error when auth type is not available', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: undefined,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Authentication type not available.',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return dialog action for QWEN_OAUTH auth type', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'dialog',
|
||||
dialog: 'model',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return dialog action for USE_OPENAI auth type when model is available', async () => {
|
||||
mockGetOpenAIAvailableModelFromEnv.mockReturnValue({
|
||||
id: 'gpt-4',
|
||||
label: 'gpt-4',
|
||||
});
|
||||
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'dialog',
|
||||
dialog: 'model',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error for USE_OPENAI auth type when no model is available', async () => {
|
||||
mockGetOpenAIAvailableModelFromEnv.mockReturnValue(null);
|
||||
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content:
|
||||
'No models available for the current authentication type (openai).',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error for unsupported auth types', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: 'UNSUPPORTED_AUTH_TYPE' as AuthType,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content:
|
||||
'No models available for the current authentication type (UNSUPPORTED_AUTH_TYPE).',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle undefined auth type', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: undefined,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Authentication type not available.',
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,88 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { AuthType } from '@qwen-code/qwen-code-core';
|
||||
import type {
|
||||
SlashCommand,
|
||||
CommandContext,
|
||||
OpenDialogActionReturn,
|
||||
MessageActionReturn,
|
||||
} from './types.js';
|
||||
import { CommandKind } from './types.js';
|
||||
import {
|
||||
AVAILABLE_MODELS_QWEN,
|
||||
getOpenAIAvailableModelFromEnv,
|
||||
type AvailableModel,
|
||||
} from '../models/availableModels.js';
|
||||
|
||||
function getAvailableModelsForAuthType(authType: AuthType): AvailableModel[] {
|
||||
switch (authType) {
|
||||
case AuthType.QWEN_OAUTH:
|
||||
return AVAILABLE_MODELS_QWEN;
|
||||
case AuthType.USE_OPENAI: {
|
||||
const openAIModel = getOpenAIAvailableModelFromEnv();
|
||||
return openAIModel ? [openAIModel] : [];
|
||||
}
|
||||
default:
|
||||
// For other auth types, return empty array for now
|
||||
// This can be expanded later according to the design doc
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
export const modelCommand: SlashCommand = {
|
||||
name: 'model',
|
||||
description: 'Switch the model for this session',
|
||||
kind: CommandKind.BUILT_IN,
|
||||
action: async (
|
||||
context: CommandContext,
|
||||
): Promise<OpenDialogActionReturn | MessageActionReturn> => {
|
||||
const { services } = context;
|
||||
const { config } = services;
|
||||
|
||||
if (!config) {
|
||||
return {
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Configuration not available.',
|
||||
};
|
||||
}
|
||||
|
||||
const contentGeneratorConfig = config.getContentGeneratorConfig();
|
||||
if (!contentGeneratorConfig) {
|
||||
return {
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Content generator configuration not available.',
|
||||
};
|
||||
}
|
||||
|
||||
const authType = contentGeneratorConfig.authType;
|
||||
if (!authType) {
|
||||
return {
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: 'Authentication type not available.',
|
||||
};
|
||||
}
|
||||
|
||||
const availableModels = getAvailableModelsForAuthType(authType);
|
||||
|
||||
if (availableModels.length === 0) {
|
||||
return {
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content: `No models available for the current authentication type (${authType}).`,
|
||||
};
|
||||
}
|
||||
|
||||
// Trigger model selection dialog
|
||||
return {
|
||||
type: 'dialog',
|
||||
dialog: 'model',
|
||||
};
|
||||
},
|
||||
};
|
||||
@@ -116,7 +116,6 @@ export interface OpenDialogActionReturn {
|
||||
| 'editor'
|
||||
| 'privacy'
|
||||
| 'settings'
|
||||
| 'model'
|
||||
| 'subagent_create'
|
||||
| 'subagent_list';
|
||||
}
|
||||
|
||||
@@ -30,7 +30,7 @@ describe('FolderTrustDialog', () => {
|
||||
|
||||
expect(lastFrame()).toContain('Do you trust this folder?');
|
||||
expect(lastFrame()).toContain(
|
||||
'Trusting a folder allows Qwen Code to execute commands it suggests.',
|
||||
'Trusting a folder allows Gemini to execute commands it suggests.',
|
||||
);
|
||||
});
|
||||
|
||||
@@ -66,7 +66,7 @@ describe('FolderTrustDialog', () => {
|
||||
);
|
||||
|
||||
expect(lastFrame()).toContain(
|
||||
'To see changes, Qwen Code must be restarted',
|
||||
'To see changes, Gemini CLI must be restarted',
|
||||
);
|
||||
});
|
||||
|
||||
|
||||
@@ -73,7 +73,7 @@ export const FolderTrustDialog: React.FC<FolderTrustDialogProps> = ({
|
||||
<Box flexDirection="column" marginBottom={1}>
|
||||
<Text bold>Do you trust this folder?</Text>
|
||||
<Text>
|
||||
Trusting a folder allows Qwen Code to execute commands it suggests.
|
||||
Trusting a folder allows Gemini to execute commands it suggests.
|
||||
This is a security feature to prevent accidental execution in
|
||||
untrusted directories.
|
||||
</Text>
|
||||
@@ -88,7 +88,7 @@ export const FolderTrustDialog: React.FC<FolderTrustDialogProps> = ({
|
||||
{isRestarting && (
|
||||
<Box marginLeft={1} marginTop={1}>
|
||||
<Text color={Colors.AccentYellow}>
|
||||
To see changes, Qwen Code must be restarted. Press r to exit and
|
||||
To see changes, Gemini CLI must be restarted. Press r to exit and
|
||||
apply changes now.
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
*/
|
||||
|
||||
import type React from 'react';
|
||||
import { memo } from 'react';
|
||||
import type { HistoryItem } from '../types.js';
|
||||
import { UserMessage } from './messages/UserMessage.js';
|
||||
import { UserShellMessage } from './messages/UserShellMessage.js';
|
||||
@@ -36,7 +35,7 @@ interface HistoryItemDisplayProps {
|
||||
commands?: readonly SlashCommand[];
|
||||
}
|
||||
|
||||
const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
|
||||
export const HistoryItemDisplay: React.FC<HistoryItemDisplayProps> = ({
|
||||
item,
|
||||
availableTerminalHeight,
|
||||
terminalWidth,
|
||||
@@ -102,7 +101,3 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
|
||||
{item.type === 'summary' && <SummaryMessage summary={item.summary} />}
|
||||
</Box>
|
||||
);
|
||||
|
||||
HistoryItemDisplayComponent.displayName = 'HistoryItemDisplay';
|
||||
|
||||
export const HistoryItemDisplay = memo(HistoryItemDisplayComponent);
|
||||
|
||||
@@ -1,246 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/

import React from 'react';
import { render } from 'ink-testing-library';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { ModelSelectionDialog } from './ModelSelectionDialog.js';
import type { AvailableModel } from '../models/availableModels.js';
import type { RadioSelectItem } from './shared/RadioButtonSelect.js';

// Mock the useKeypress hook
const mockUseKeypress = vi.hoisted(() => vi.fn());
vi.mock('../hooks/useKeypress.js', () => ({
useKeypress: mockUseKeypress,
}));

// Mock the RadioButtonSelect component
const mockRadioButtonSelect = vi.hoisted(() => vi.fn());
vi.mock('./shared/RadioButtonSelect.js', () => ({
RadioButtonSelect: mockRadioButtonSelect,
}));

describe('ModelSelectionDialog', () => {
const mockAvailableModels: AvailableModel[] = [
{ id: 'qwen3-coder-plus', label: 'qwen3-coder-plus' },
{ id: 'qwen-vl-max-latest', label: 'qwen-vl-max', isVision: true },
{ id: 'gpt-4', label: 'GPT-4' },
];

const mockOnSelect = vi.fn();
const mockOnCancel = vi.fn();

beforeEach(() => {
vi.clearAllMocks();

// Mock RadioButtonSelect to return a simple div
mockRadioButtonSelect.mockReturnValue(
React.createElement('div', { 'data-testid': 'radio-select' }),
);
});

it('should setup escape key handler to call onCancel', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

expect(mockUseKeypress).toHaveBeenCalledWith(expect.any(Function), {
isActive: true,
});

// Simulate escape key press
const keypressHandler = mockUseKeypress.mock.calls[0][0];
keypressHandler({ name: 'escape' });

expect(mockOnCancel).toHaveBeenCalled();
});

it('should not call onCancel for non-escape keys', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const keypressHandler = mockUseKeypress.mock.calls[0][0];
keypressHandler({ name: 'enter' });

expect(mockOnCancel).not.toHaveBeenCalled();
});

it('should set correct initial index for current model', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen-vl-max-latest"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.initialIndex).toBe(1); // qwen-vl-max-latest is at index 1
});

it('should set initial index to 0 when current model is not found', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="non-existent-model"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.initialIndex).toBe(0);
});

it('should call onSelect when a model is selected', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(typeof callArgs.onSelect).toBe('function');

// Simulate selection
const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;
onSelectCallback('qwen-vl-max-latest');

expect(mockOnSelect).toHaveBeenCalledWith('qwen-vl-max-latest');
});

it('should handle empty models array', () => {
render(
<ModelSelectionDialog
availableModels={[]}
currentModel=""
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.items).toEqual([]);
expect(callArgs.initialIndex).toBe(0);
});

it('should create correct option items with proper labels', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const expectedItems = [
{
label: 'qwen3-coder-plus (current)',
value: 'qwen3-coder-plus',
},
{
label: 'qwen-vl-max [Vision]',
value: 'qwen-vl-max-latest',
},
{
label: 'GPT-4',
value: 'gpt-4',
},
];

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.items).toEqual(expectedItems);
});

it('should show vision indicator for vision models', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="gpt-4"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
const visionModelItem = callArgs.items.find(
(item: RadioSelectItem<string>) => item.value === 'qwen-vl-max-latest',
);

expect(visionModelItem?.label).toContain('[Vision]');
});

it('should show current indicator for the current model', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen-vl-max-latest"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
const currentModelItem = callArgs.items.find(
(item: RadioSelectItem<string>) => item.value === 'qwen-vl-max-latest',
);

expect(currentModelItem?.label).toContain('(current)');
});

it('should pass isFocused prop to RadioButtonSelect', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.isFocused).toBe(true);
});

it('should handle multiple onSelect calls correctly', () => {
render(
<ModelSelectionDialog
availableModels={mockAvailableModels}
currentModel="qwen3-coder-plus"
onSelect={mockOnSelect}
onCancel={mockOnCancel}
/>,
);

const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;

// Call multiple times
onSelectCallback('qwen3-coder-plus');
onSelectCallback('qwen-vl-max-latest');
onSelectCallback('gpt-4');

expect(mockOnSelect).toHaveBeenCalledTimes(3);
expect(mockOnSelect).toHaveBeenNthCalledWith(1, 'qwen3-coder-plus');
expect(mockOnSelect).toHaveBeenNthCalledWith(2, 'qwen-vl-max-latest');
expect(mockOnSelect).toHaveBeenNthCalledWith(3, 'gpt-4');
});
});
@@ -1,87 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/

import type React from 'react';
import { Box, Text } from 'ink';
import { Colors } from '../colors.js';
import {
RadioButtonSelect,
type RadioSelectItem,
} from './shared/RadioButtonSelect.js';
import { useKeypress } from '../hooks/useKeypress.js';
import type { AvailableModel } from '../models/availableModels.js';

export interface ModelSelectionDialogProps {
availableModels: AvailableModel[];
currentModel: string;
onSelect: (modelId: string) => void;
onCancel: () => void;
}

export const ModelSelectionDialog: React.FC<ModelSelectionDialogProps> = ({
availableModels,
currentModel,
onSelect,
onCancel,
}) => {
useKeypress(
(key) => {
if (key.name === 'escape') {
onCancel();
}
},
{ isActive: true },
);

const options: Array<RadioSelectItem<string>> = availableModels.map(
(model) => {
const visionIndicator = model.isVision ? ' [Vision]' : '';
const currentIndicator = model.id === currentModel ? ' (current)' : '';
return {
label: `${model.label}${visionIndicator}${currentIndicator}`,
value: model.id,
};
},
);

const initialIndex = Math.max(
0,
availableModels.findIndex((model) => model.id === currentModel),
);

const handleSelect = (modelId: string) => {
onSelect(modelId);
};

return (
<Box
flexDirection="column"
borderStyle="round"
borderColor={Colors.AccentBlue}
padding={1}
width="100%"
marginLeft={1}
>
<Box flexDirection="column" marginBottom={1}>
<Text bold>Select Model</Text>
<Text>Choose a model for this session:</Text>
</Box>

<Box marginBottom={1}>
<RadioButtonSelect
items={options}
initialIndex={initialIndex}
onSelect={handleSelect}
isFocused
/>
</Box>

<Box>
<Text color={Colors.Gray}>Press Enter to select, Esc to cancel</Text>
</Box>
</Box>
);
};
@@ -1,185 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/

import React from 'react';
import { render } from 'ink-testing-library';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { ModelSwitchDialog, VisionSwitchOutcome } from './ModelSwitchDialog.js';

// Mock the useKeypress hook
const mockUseKeypress = vi.hoisted(() => vi.fn());
vi.mock('../hooks/useKeypress.js', () => ({
useKeypress: mockUseKeypress,
}));

// Mock the RadioButtonSelect component
const mockRadioButtonSelect = vi.hoisted(() => vi.fn());
vi.mock('./shared/RadioButtonSelect.js', () => ({
RadioButtonSelect: mockRadioButtonSelect,
}));

describe('ModelSwitchDialog', () => {
const mockOnSelect = vi.fn();

beforeEach(() => {
vi.clearAllMocks();

// Mock RadioButtonSelect to return a simple div
mockRadioButtonSelect.mockReturnValue(
React.createElement('div', { 'data-testid': 'radio-select' }),
);
});

it('should setup RadioButtonSelect with correct options', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const expectedItems = [
{
label: 'Switch for this request only',
value: VisionSwitchOutcome.SwitchOnce,
},
{
label: 'Switch session to vision model',
value: VisionSwitchOutcome.SwitchSessionToVL,
},
{
label: 'Do not switch, show guidance',
value: VisionSwitchOutcome.DisallowWithGuidance,
},
];

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.items).toEqual(expectedItems);
expect(callArgs.initialIndex).toBe(0);
expect(callArgs.isFocused).toBe(true);
});

it('should call onSelect when an option is selected', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(typeof callArgs.onSelect).toBe('function');

// Simulate selection of "Switch for this request only"
const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;
onSelectCallback(VisionSwitchOutcome.SwitchOnce);

expect(mockOnSelect).toHaveBeenCalledWith(VisionSwitchOutcome.SwitchOnce);
});

it('should call onSelect with SwitchSessionToVL when second option is selected', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;
onSelectCallback(VisionSwitchOutcome.SwitchSessionToVL);

expect(mockOnSelect).toHaveBeenCalledWith(
VisionSwitchOutcome.SwitchSessionToVL,
);
});

it('should call onSelect with DisallowWithGuidance when third option is selected', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;
onSelectCallback(VisionSwitchOutcome.DisallowWithGuidance);

expect(mockOnSelect).toHaveBeenCalledWith(
VisionSwitchOutcome.DisallowWithGuidance,
);
});

it('should setup escape key handler to call onSelect with DisallowWithGuidance', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

expect(mockUseKeypress).toHaveBeenCalledWith(expect.any(Function), {
isActive: true,
});

// Simulate escape key press
const keypressHandler = mockUseKeypress.mock.calls[0][0];
keypressHandler({ name: 'escape' });

expect(mockOnSelect).toHaveBeenCalledWith(
VisionSwitchOutcome.DisallowWithGuidance,
);
});

it('should not call onSelect for non-escape keys', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const keypressHandler = mockUseKeypress.mock.calls[0][0];
keypressHandler({ name: 'enter' });

expect(mockOnSelect).not.toHaveBeenCalled();
});

it('should set initial index to 0 (first option)', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.initialIndex).toBe(0);
});

describe('VisionSwitchOutcome enum', () => {
it('should have correct enum values', () => {
expect(VisionSwitchOutcome.SwitchOnce).toBe('switch_once');
expect(VisionSwitchOutcome.SwitchSessionToVL).toBe(
'switch_session_to_vl',
);
expect(VisionSwitchOutcome.DisallowWithGuidance).toBe(
'disallow_with_guidance',
);
});
});

it('should handle multiple onSelect calls correctly', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const onSelectCallback = mockRadioButtonSelect.mock.calls[0][0].onSelect;

// Call multiple times
onSelectCallback(VisionSwitchOutcome.SwitchOnce);
onSelectCallback(VisionSwitchOutcome.SwitchSessionToVL);
onSelectCallback(VisionSwitchOutcome.DisallowWithGuidance);

expect(mockOnSelect).toHaveBeenCalledTimes(3);
expect(mockOnSelect).toHaveBeenNthCalledWith(
1,
VisionSwitchOutcome.SwitchOnce,
);
expect(mockOnSelect).toHaveBeenNthCalledWith(
2,
VisionSwitchOutcome.SwitchSessionToVL,
);
expect(mockOnSelect).toHaveBeenNthCalledWith(
3,
VisionSwitchOutcome.DisallowWithGuidance,
);
});

it('should pass isFocused prop to RadioButtonSelect', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const callArgs = mockRadioButtonSelect.mock.calls[0][0];
expect(callArgs.isFocused).toBe(true);
});

it('should handle escape key multiple times', () => {
render(<ModelSwitchDialog onSelect={mockOnSelect} />);

const keypressHandler = mockUseKeypress.mock.calls[0][0];

// Call escape multiple times
keypressHandler({ name: 'escape' });
keypressHandler({ name: 'escape' });

expect(mockOnSelect).toHaveBeenCalledTimes(2);
expect(mockOnSelect).toHaveBeenCalledWith(
VisionSwitchOutcome.DisallowWithGuidance,
);
});
});
@@ -1,89 +0,0 @@
/**
* @license
* Copyright 2025 Qwen
* SPDX-License-Identifier: Apache-2.0
*/

import type React from 'react';
import { Box, Text } from 'ink';
import { Colors } from '../colors.js';
import {
RadioButtonSelect,
type RadioSelectItem,
} from './shared/RadioButtonSelect.js';
import { useKeypress } from '../hooks/useKeypress.js';

export enum VisionSwitchOutcome {
SwitchOnce = 'switch_once',
SwitchSessionToVL = 'switch_session_to_vl',
DisallowWithGuidance = 'disallow_with_guidance',
}

export interface ModelSwitchDialogProps {
onSelect: (outcome: VisionSwitchOutcome) => void;
}

export const ModelSwitchDialog: React.FC<ModelSwitchDialogProps> = ({
onSelect,
}) => {
useKeypress(
(key) => {
if (key.name === 'escape') {
onSelect(VisionSwitchOutcome.DisallowWithGuidance);
}
},
{ isActive: true },
);

const options: Array<RadioSelectItem<VisionSwitchOutcome>> = [
{
label: 'Switch for this request only',
value: VisionSwitchOutcome.SwitchOnce,
},
{
label: 'Switch session to vision model',
value: VisionSwitchOutcome.SwitchSessionToVL,
},
{
label: 'Do not switch, show guidance',
value: VisionSwitchOutcome.DisallowWithGuidance,
},
];

const handleSelect = (outcome: VisionSwitchOutcome) => {
onSelect(outcome);
};

return (
<Box
flexDirection="column"
borderStyle="round"
borderColor={Colors.AccentYellow}
padding={1}
width="100%"
marginLeft={1}
>
<Box flexDirection="column" marginBottom={1}>
<Text bold>Vision Model Switch Required</Text>
<Text>
Your message contains an image, but the current model doesn't
support vision.
</Text>
<Text>How would you like to proceed?</Text>
</Box>

<Box marginBottom={1}>
<RadioButtonSelect
items={options}
initialIndex={0}
onSelect={handleSelect}
isFocused
/>
</Box>

<Box>
<Text color={Colors.Gray}>Press Enter to select, Esc to cancel</Text>
</Box>
</Box>
);
};
@@ -678,7 +678,7 @@ describe('SettingsDialog', () => {

// Should not show restart prompt initially
expect(lastFrame()).not.toContain(
'To see changes, Qwen Code must be restarted',
'To see changes, Gemini CLI must be restarted',
);

unmount();

@@ -780,7 +780,7 @@ export function SettingsDialog({
</Text>
{showRestartPrompt && (
<Text color={Colors.AccentYellow}>
To see changes, Qwen Code must be restarted. Press r to exit and
To see changes, Gemini CLI must be restarted. Press r to exit and
apply changes now.
</Text>
)}

@@ -47,7 +47,7 @@ export function WorkspaceMigrationDialog(props: {
<>
<Text>
The following extensions failed to migrate. Please try installing
them manually. To see other changes, Qwen Code must be restarted.
them manually. To see other changes, Gemini CLI must be restarted.
Press {"'q'"} to quit.
</Text>
<Box flexDirection="column" marginTop={1} marginLeft={2}>
@@ -58,7 +58,7 @@ export function WorkspaceMigrationDialog(props: {
</>
) : (
<Text>
Migration complete. To see changes, Qwen Code must be restarted.
Migration complete. To see changes, Gemini CLI must be restarted.
Press {"'q'"} to quit.
</Text>
)}

@@ -27,7 +27,6 @@ export interface ToolConfirmationMessageProps {
isFocused?: boolean;
availableTerminalHeight?: number;
terminalWidth: number;
compactMode?: boolean;
}

export const ToolConfirmationMessage: React.FC<
@@ -38,7 +37,6 @@ export const ToolConfirmationMessage: React.FC<
isFocused = true,
availableTerminalHeight,
terminalWidth,
compactMode = false,
}) => {
const { onConfirm } = confirmationDetails;
const childWidth = terminalWidth - 2; // 2 for padding
@@ -72,40 +70,6 @@ export const ToolConfirmationMessage: React.FC<

const handleSelect = (item: ToolConfirmationOutcome) => handleConfirm(item);

// Compact mode: return simple 3-option display
if (compactMode) {
const compactOptions: Array<RadioSelectItem<ToolConfirmationOutcome>> = [
{
label: 'Yes, allow once',
value: ToolConfirmationOutcome.ProceedOnce,
},
{
label: 'Allow always',
value: ToolConfirmationOutcome.ProceedAlways,
},
{
label: 'No',
value: ToolConfirmationOutcome.Cancel,
},
];

return (
<Box flexDirection="column">
<Box>
<Text wrap="truncate">Do you want to proceed?</Text>
</Box>
<Box>
<RadioButtonSelect
items={compactOptions}
onSelect={handleSelect}
isFocused={isFocused}
/>
</Box>
</Box>
);
}

// Original logic continues unchanged below
let bodyContent: React.ReactNode | null = null; // Removed contextDisplay here
let question: string;

@@ -5,7 +5,7 @@
*/

import { useReducer, useCallback, useMemo } from 'react';
import { Box, Text } from 'ink';
import { Box, Text, useInput } from 'ink';
import { wizardReducer, initialWizardState } from '../reducers.js';
import { LocationSelector } from './LocationSelector.js';
import { GenerationMethodSelector } from './GenerationMethodSelector.js';
@@ -20,7 +20,6 @@ import type { Config } from '@qwen-code/qwen-code-core';
import { Colors } from '../../../colors.js';
import { theme } from '../../../semantic-colors.js';
import { TextEntryStep } from './TextEntryStep.js';
import { useKeypress } from '../../../hooks/useKeypress.js';

interface AgentCreationWizardProps {
onClose: () => void;
@@ -50,12 +49,8 @@ export function AgentCreationWizard({
}, [onClose]);

// Centralized ESC key handling for the entire wizard
useKeypress(
(key) => {
if (key.name !== 'escape') {
return;
}

useInput((input, key) => {
if (key.escape) {
// LLM DescriptionInput handles its own ESC logic when generating
const kind = getStepKind(state.generationMethod, state.currentStep);
if (kind === 'LLM_DESC' && state.isGenerating) {
@@ -69,9 +64,8 @@ export function AgentCreationWizard({
// On other steps, ESC goes back to previous step
handlePrevious();
}
},
{ isActive: true },
);
}
});

const stepProps: WizardStepProps = useMemo(
() => ({

@@ -227,7 +227,7 @@ export const AgentSelectionStep = ({
|
||||
const textColor = isSelected ? theme.text.accent : theme.text.primary;
|
||||
|
||||
return (
|
||||
<Box key={`${agent.name}-${agent.level}`} alignItems="center">
|
||||
<Box key={agent.name} alignItems="center">
|
||||
<Box minWidth={2} flexShrink={0}>
|
||||
<Text color={isSelected ? theme.text.accent : theme.text.primary}>
|
||||
{isSelected ? '●' : ' '}
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
*/
|
||||
|
||||
import { useState, useCallback, useMemo, useEffect } from 'react';
|
||||
import { Box, Text } from 'ink';
|
||||
import { Box, Text, useInput } from 'ink';
|
||||
import { AgentSelectionStep } from './AgentSelectionStep.js';
|
||||
import { ActionSelectionStep } from './ActionSelectionStep.js';
|
||||
import { AgentViewerStep } from './AgentViewerStep.js';
|
||||
@@ -17,8 +17,7 @@ import { MANAGEMENT_STEPS } from '../types.js';
|
||||
import { Colors } from '../../../colors.js';
|
||||
import { theme } from '../../../semantic-colors.js';
|
||||
import { getColorForDisplay, shouldShowColor } from '../utils.js';
|
||||
import type { SubagentConfig, Config } from '@qwen-code/qwen-code-core';
|
||||
import { useKeypress } from '../../../hooks/useKeypress.js';
|
||||
import type { Config, SubagentConfig } from '@qwen-code/qwen-code-core';
|
||||
|
||||
interface AgentsManagerDialogProps {
|
||||
onClose: () => void;
|
||||
@@ -53,7 +52,18 @@ export function AgentsManagerDialog({
|
||||
const manager = config.getSubagentManager();
|
||||
|
||||
// Load agents from all levels separately to show all agents including conflicts
|
||||
const allAgents = await manager.listSubagents();
|
||||
const [projectAgents, userAgents, builtinAgents] = await Promise.all([
|
||||
manager.listSubagents({ level: 'project' }),
|
||||
manager.listSubagents({ level: 'user' }),
|
||||
manager.listSubagents({ level: 'builtin' }),
|
||||
]);
|
||||
|
||||
// Combine all agents (project, user, and builtin level)
|
||||
const allAgents = [
|
||||
...(projectAgents || []),
|
||||
...(userAgents || []),
|
||||
...(builtinAgents || []),
|
||||
];
|
||||
|
||||
setAvailableAgents(allAgents);
|
||||
}, [config]);
|
||||
@@ -112,12 +122,8 @@ export function AgentsManagerDialog({
|
||||
);
|
||||
|
||||
// Centralized ESC key handling for the entire dialog
|
||||
useKeypress(
|
||||
(key) => {
|
||||
if (key.name !== 'escape') {
|
||||
return;
|
||||
}
|
||||
|
||||
useInput((input, key) => {
|
||||
if (key.escape) {
|
||||
const currentStep = getCurrentStep();
|
||||
if (currentStep === MANAGEMENT_STEPS.AGENT_SELECTION) {
|
||||
// On first step, ESC cancels the entire dialog
|
||||
@@ -126,9 +132,8 @@ export function AgentsManagerDialog({
|
||||
// On other steps, ESC goes back to previous step in navigation stack
|
||||
handleNavigateBack();
|
||||
}
|
||||
},
|
||||
{ isActive: true },
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
// Props for child components - now using direct state and callbacks
|
||||
const commonProps = useMemo(
|
||||
|
||||
@@ -18,12 +18,12 @@ import { COLOR_OPTIONS } from '../constants.js';
|
||||
import { fmtDuration } from '../utils.js';
|
||||
import { ToolConfirmationMessage } from '../../messages/ToolConfirmationMessage.js';
|
||||
|
||||
export type DisplayMode = 'compact' | 'default' | 'verbose';
|
||||
export type DisplayMode = 'default' | 'verbose';
|
||||
|
||||
export interface AgentExecutionDisplayProps {
|
||||
data: TaskResultDisplay;
|
||||
availableHeight?: number;
|
||||
childWidth: number;
|
||||
childWidth?: number;
|
||||
config: Config;
|
||||
}
|
||||
|
||||
@@ -80,7 +80,7 @@ export const AgentExecutionDisplay: React.FC<AgentExecutionDisplayProps> = ({
|
||||
childWidth,
|
||||
config,
|
||||
}) => {
|
||||
const [displayMode, setDisplayMode] = React.useState<DisplayMode>('compact');
|
||||
const [displayMode, setDisplayMode] = React.useState<DisplayMode>('default');
|
||||
|
||||
const agentColor = useMemo(() => {
|
||||
const colorOption = COLOR_OPTIONS.find(
|
||||
@@ -93,6 +93,8 @@ export const AgentExecutionDisplay: React.FC<AgentExecutionDisplayProps> = ({
|
||||
// This component only listens to keyboard shortcut events when the subagent is running
|
||||
if (data.status !== 'running') return '';
|
||||
|
||||
if (displayMode === 'verbose') return 'Press ctrl+r to show less.';
|
||||
|
||||
if (displayMode === 'default') {
|
||||
const hasMoreLines =
|
||||
data.taskPrompt.split('\n').length > MAX_TASK_PROMPT_LINES;
|
||||
@@ -100,28 +102,17 @@ export const AgentExecutionDisplay: React.FC<AgentExecutionDisplayProps> = ({
|
||||
data.toolCalls && data.toolCalls.length > MAX_TOOL_CALLS;
|
||||
|
||||
if (hasMoreToolCalls || hasMoreLines) {
|
||||
return 'Press ctrl+r to show less, ctrl+e to show more.';
|
||||
return 'Press ctrl+r to show more.';
|
||||
}
|
||||
return 'Press ctrl+r to show less.';
|
||||
return '';
|
||||
}
|
||||
|
||||
if (displayMode === 'verbose') {
|
||||
return 'Press ctrl+e to show less.';
|
||||
}
|
||||
|
||||
return '';
|
||||
}, [displayMode, data]);
|
||||
}, [displayMode, data.toolCalls, data.taskPrompt, data.status]);
|
||||
|
||||
// Handle keyboard shortcuts to control display mode
|
||||
// Handle ctrl+r keypresses to control display mode
|
||||
useKeypress(
|
||||
(key) => {
|
||||
if (key.ctrl && key.name === 'r') {
|
||||
// ctrl+r toggles between compact and default
|
||||
setDisplayMode((current) =>
|
||||
current === 'compact' ? 'default' : 'compact',
|
||||
);
|
||||
} else if (key.ctrl && key.name === 'e') {
|
||||
// ctrl+e toggles between default and verbose
|
||||
setDisplayMode((current) =>
|
||||
current === 'default' ? 'verbose' : 'default',
|
||||
);
|
||||
@@ -130,82 +121,6 @@ export const AgentExecutionDisplay: React.FC<AgentExecutionDisplayProps> = ({
|
||||
{ isActive: true },
|
||||
);
|
||||
|
||||
if (displayMode === 'compact') {
|
||||
return (
|
||||
<Box flexDirection="column">
|
||||
{/* Header: Agent name and status */}
|
||||
{!data.pendingConfirmation && (
|
||||
<Box flexDirection="row">
|
||||
<Text bold color={agentColor}>
|
||||
{data.subagentName}
|
||||
</Text>
|
||||
<StatusDot status={data.status} />
|
||||
<StatusIndicator status={data.status} />
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{/* Running state: Show current tool call and progress */}
|
||||
{data.status === 'running' && (
|
||||
<>
|
||||
{/* Current tool call */}
|
||||
{data.toolCalls && data.toolCalls.length > 0 && (
|
||||
<Box flexDirection="column">
|
||||
<ToolCallItem
|
||||
toolCall={data.toolCalls[data.toolCalls.length - 1]}
|
||||
compact={true}
|
||||
/>
|
||||
{/* Show count of additional tool calls if there are more than 1 */}
|
||||
{data.toolCalls.length > 1 && !data.pendingConfirmation && (
|
||||
<Box flexDirection="row" paddingLeft={4}>
|
||||
<Text color={Colors.Gray}>
|
||||
+{data.toolCalls.length - 1} more tool calls (ctrl+r to
|
||||
expand)
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{/* Inline approval prompt when awaiting confirmation */}
|
||||
{data.pendingConfirmation && (
|
||||
<Box flexDirection="column" marginTop={1} paddingLeft={1}>
|
||||
<ToolConfirmationMessage
|
||||
confirmationDetails={data.pendingConfirmation}
|
||||
isFocused={true}
|
||||
availableTerminalHeight={availableHeight}
|
||||
terminalWidth={childWidth}
|
||||
compactMode={true}
|
||||
config={config}
|
||||
/>
|
||||
</Box>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
|
||||
{/* Completed state: Show summary line */}
|
||||
{data.status === 'completed' && data.executionSummary && (
|
||||
<Box flexDirection="row" marginTop={1}>
|
||||
<Text color={theme.text.secondary}>
|
||||
Execution Summary: {data.executionSummary.totalToolCalls} tool
|
||||
uses · {data.executionSummary.totalTokens.toLocaleString()} tokens
|
||||
· {fmtDuration(data.executionSummary.totalDurationMs)}
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{/* Failed/Cancelled state: Show error reason */}
|
||||
{data.status === 'failed' && (
|
||||
<Box flexDirection="row" marginTop={1}>
|
||||
<Text color={theme.status.error}>
|
||||
Failed: {data.terminateReason}
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
);
|
||||
}
|
||||
|
||||
// Default and verbose modes use normal layout
|
||||
return (
|
||||
<Box flexDirection="column" paddingX={1} gap={1}>
|
||||
{/* Header with subagent name and status */}
|
||||
@@ -243,8 +158,7 @@ export const AgentExecutionDisplay: React.FC<AgentExecutionDisplayProps> = ({
|
||||
config={config}
|
||||
isFocused={true}
|
||||
availableTerminalHeight={availableHeight}
|
||||
terminalWidth={childWidth}
|
||||
compactMode={true}
|
||||
terminalWidth={childWidth ?? 80}
|
||||
/>
|
||||
</Box>
|
||||
)}
|
||||
@@ -366,8 +280,7 @@ const ToolCallItem: React.FC<{
|
||||
resultDisplay?: string;
|
||||
description?: string;
|
||||
};
|
||||
compact?: boolean;
|
||||
}> = ({ toolCall, compact = false }) => {
|
||||
}> = ({ toolCall }) => {
|
||||
const STATUS_INDICATOR_WIDTH = 3;
|
||||
|
||||
// Map subagent status to ToolCallStatus-like display
|
||||
@@ -422,8 +335,8 @@ const ToolCallItem: React.FC<{
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
{/* Second line: truncated returnDisplay output - hidden in compact mode */}
|
||||
{!compact && truncatedOutput && (
|
||||
{/* Second line: truncated returnDisplay output */}
|
||||
{truncatedOutput && (
|
||||
<Box flexDirection="row" paddingLeft={STATUS_INDICATOR_WIDTH}>
|
||||
<Text color={Colors.Gray}>{truncatedOutput}</Text>
|
||||
</Box>
|
||||
|
||||
@@ -58,16 +58,11 @@ describe('KeypressContext - Kitty Protocol', () => {
|
||||
const wrapper = ({
|
||||
children,
|
||||
kittyProtocolEnabled = true,
|
||||
pasteWorkaround = false,
|
||||
}: {
|
||||
children: React.ReactNode;
|
||||
kittyProtocolEnabled?: boolean;
|
||||
pasteWorkaround?: boolean;
|
||||
}) => (
|
||||
<KeypressProvider
|
||||
kittyProtocolEnabled={kittyProtocolEnabled}
|
||||
pasteWorkaround={pasteWorkaround}
|
||||
>
|
||||
<KeypressProvider kittyProtocolEnabled={kittyProtocolEnabled}>
|
||||
{children}
|
||||
</KeypressProvider>
|
||||
);
|
||||
@@ -384,722 +379,6 @@ describe('KeypressContext - Kitty Protocol', () => {
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
describe('paste mode markers', () => {
|
||||
// These tests use pasteWorkaround=true to force passthrough mode for raw keypress testing
|
||||
|
||||
it('should handle complete paste sequence with markers', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
const pastedText = 'pasted content';
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send complete paste sequence: prefix + content + suffix
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from(`\x1b[200~${pastedText}\x1b[201~`));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
// Should emit a single paste event with the content
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: pastedText,
|
||||
name: '',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle empty paste sequence', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send empty paste sequence: prefix immediately followed by suffix
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('\x1b[200~\x1b[201~'));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
// Should emit a paste event with empty content
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: '',
|
||||
name: '',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle data before paste markers', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send data before paste sequence
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('before\x1b[200~pasted\x1b[201~'));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(7); // 6 chars + 1 paste event
|
||||
});
|
||||
|
||||
// Should process 'before' as individual characters
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
expect.objectContaining({ name: 'b' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
expect.objectContaining({ name: 'e' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
expect.objectContaining({ name: 'f' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
4,
|
||||
expect.objectContaining({ name: 'o' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
5,
|
||||
expect.objectContaining({ name: 'r' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
6,
|
||||
expect.objectContaining({ name: 'e' }),
|
||||
);
|
||||
|
||||
// Then emit paste event
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
7,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'pasted',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle data after paste markers', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send paste sequence followed by data
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('\x1b[200~pasted\x1b[201~after'));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(2); // 1 paste event + 1 paste event for 'after'
|
||||
});
|
||||
|
||||
// Should emit paste event first
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'pasted',
|
||||
}),
|
||||
);
|
||||
|
||||
// Then process 'after' as a paste event (since it's > 2 chars)
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'after',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle complex sequence with multiple paste blocks', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send complex sequence: data + paste1 + data + paste2 + data
|
||||
act(() => {
|
||||
stdin.emit(
|
||||
'data',
|
||||
Buffer.from(
|
||||
'start\x1b[200~first\x1b[201~middle\x1b[200~second\x1b[201~end',
|
||||
),
|
||||
);
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(14); // Adjusted based on actual behavior
|
||||
});
|
||||
|
||||
// Check the sequence: 'start' (5 chars) + paste1 + 'middle' (6 chars) + paste2 + 'end' (3 chars as paste)
|
||||
let callIndex = 1;
|
||||
|
||||
// 'start'
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 's' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 't' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'a' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'r' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 't' }),
|
||||
);
|
||||
|
||||
// first paste
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'first',
|
||||
}),
|
||||
);
|
||||
|
||||
// 'middle'
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'm' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'i' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'd' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'd' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'l' }),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({ name: 'e' }),
|
||||
);
|
||||
|
||||
// second paste
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'second',
|
||||
}),
|
||||
);
|
||||
|
||||
// 'end' as paste event (since it's > 2 chars)
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
callIndex++,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'end',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle fragmented paste markers across multiple data events', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send fragmented paste sequence
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('\x1b[200~partial'));
|
||||
stdin.emit('data', Buffer.from(' content\x1b[201~'));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
// Should combine the fragmented content into a single paste event
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'partial content',
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle multiline content within paste markers', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
const multilineContent = 'line1\nline2\nline3';
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send paste sequence with multiline content
|
||||
act(() => {
|
||||
stdin.emit(
|
||||
'data',
|
||||
Buffer.from(`\x1b[200~${multilineContent}\x1b[201~`),
|
||||
);
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
// Should emit a single paste event with the multiline content
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: multilineContent,
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle paste markers split across buffer boundaries', async () => {
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) =>
|
||||
wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
// Send paste marker split across multiple data events
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('\x1b[20'));
|
||||
stdin.emit('data', Buffer.from('0~content\x1b[2'));
|
||||
stdin.emit('data', Buffer.from('01~'));
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
// With the current implementation, fragmented data gets processed differently
|
||||
// The first fragment '\x1b[20' gets processed as individual characters
|
||||
// The second fragment '0~content\x1b[2' gets processed as paste + individual chars
|
||||
// The third fragment '01~' gets processed as individual characters
|
||||
expect(keyHandler).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
// The current implementation processes fragmented paste markers as separate events
|
||||
// rather than reconstructing them into a single paste event
|
||||
expect(keyHandler.mock.calls.length).toBeGreaterThan(1);
|
||||
});
|
||||
});
|
||||
|
||||
it('buffers fragmented paste chunks before emitting newlines', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('\r'));
|
||||
stdin.emit('data', Buffer.from('rest of paste'));
|
||||
});
|
||||
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(8);
|
||||
});
|
||||
|
||||
// With the current implementation, fragmented data gets combined and
|
||||
// treated as a single paste event due to the buffering mechanism
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
|
||||
// Should be treated as a paste event with the combined content
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: '\rrest of paste',
|
||||
}),
|
||||
);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe('Raw keypress pipeline', () => {
|
||||
// These tests use pasteWorkaround=true to force passthrough mode for raw keypress testing
|
||||
|
||||
it('should buffer input data and wait for timeout', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// Send single character
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('a'));
|
||||
});
|
||||
|
||||
// With the current implementation, single characters are processed immediately
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
name: 'a',
|
||||
sequence: 'a',
|
||||
}),
|
||||
);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
|
||||
it('should concatenate new data and reset timeout', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// Send first chunk
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('hel'));
|
||||
});
|
||||
|
||||
// Advance timer partially
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(4);
|
||||
});
|
||||
|
||||
// Send second chunk before timeout
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('lo'));
|
||||
});
|
||||
|
||||
// With the current implementation, data is processed as it arrives
|
||||
// First chunk 'hel' is treated as paste (multi-character)
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: 'hel',
|
||||
}),
|
||||
);
|
||||
|
||||
// Second chunk 'lo' is processed as individual characters
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
expect.objectContaining({
|
||||
name: 'l',
|
||||
sequence: 'l',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
expect.objectContaining({
|
||||
name: 'o',
|
||||
sequence: 'o',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
|
||||
expect(keyHandler).toHaveBeenCalledTimes(3);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
|
||||
it('should flush immediately when buffer exceeds limit', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// Create a large buffer that exceeds the 64 byte limit
|
||||
const largeData = 'x'.repeat(65);
|
||||
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from(largeData));
|
||||
});
|
||||
|
||||
// Should flush immediately without waiting for timeout
|
||||
// Large data gets treated as paste event
|
||||
expect(keyHandler).toHaveBeenCalledTimes(1);
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
paste: true,
|
||||
sequence: largeData,
|
||||
}),
|
||||
);
|
||||
|
||||
// Advancing timer should not cause additional calls
|
||||
const callCountBefore = keyHandler.mock.calls.length;
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(8);
|
||||
});
|
||||
|
||||
expect(keyHandler).toHaveBeenCalledTimes(callCountBefore);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
|
||||
it('should clear timeout when new data arrives', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// Send first chunk
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('a'));
|
||||
});
|
||||
|
||||
// Advance timer almost to completion
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(7);
|
||||
});
|
||||
|
||||
// Send second chunk (should reset timeout)
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('b'));
|
||||
});
|
||||
|
||||
// With the current implementation, both characters are processed immediately
|
||||
expect(keyHandler).toHaveBeenCalledTimes(2);
|
||||
|
||||
// First event should be 'a', second should be 'b'
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
expect.objectContaining({
|
||||
name: 'a',
|
||||
sequence: 'a',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
expect.objectContaining({
|
||||
name: 'b',
|
||||
sequence: 'b',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
|
||||
it('should handle multiple separate keypress events', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// First keypress
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('a'));
|
||||
});
|
||||
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(8);
|
||||
});
|
||||
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
sequence: 'a',
|
||||
}),
|
||||
);
|
||||
|
||||
keyHandler.mockClear();
|
||||
|
||||
// Second keypress after first completed
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('b'));
|
||||
});
|
||||
|
||||
act(() => {
|
||||
vi.advanceTimersByTime(8);
|
||||
});
|
||||
|
||||
expect(keyHandler).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
sequence: 'b',
|
||||
}),
|
||||
);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
|
||||
it('should handle rapid sequential data within buffer limit', () => {
|
||||
vi.useFakeTimers();
|
||||
const keyHandler = vi.fn();
|
||||
|
||||
const { result } = renderHook(() => useKeypressContext(), {
|
||||
wrapper: ({ children }) => wrapper({ children, pasteWorkaround: true }),
|
||||
});
|
||||
|
||||
act(() => {
|
||||
result.current.subscribe(keyHandler);
|
||||
});
|
||||
|
||||
try {
|
||||
// Send multiple small chunks rapidly
|
||||
act(() => {
|
||||
stdin.emit('data', Buffer.from('h'));
|
||||
stdin.emit('data', Buffer.from('e'));
|
||||
stdin.emit('data', Buffer.from('l'));
|
||||
stdin.emit('data', Buffer.from('l'));
|
||||
stdin.emit('data', Buffer.from('o'));
|
||||
});
|
||||
|
||||
// With the current implementation, each character is processed immediately
|
||||
expect(keyHandler).toHaveBeenCalledTimes(5);
|
||||
|
||||
// Each character should be processed as individual keypress events
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
expect.objectContaining({
|
||||
name: 'h',
|
||||
sequence: 'h',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
expect.objectContaining({
|
||||
name: 'e',
|
||||
sequence: 'e',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
expect.objectContaining({
|
||||
name: 'l',
|
||||
sequence: 'l',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
4,
|
||||
expect.objectContaining({
|
||||
name: 'l',
|
||||
sequence: 'l',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
expect(keyHandler).toHaveBeenNthCalledWith(
|
||||
5,
|
||||
expect.objectContaining({
|
||||
name: 'o',
|
||||
sequence: 'o',
|
||||
paste: false,
|
||||
}),
|
||||
);
|
||||
} finally {
|
||||
vi.useRealTimers();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe('debug keystroke logging', () => {
|
||||
|
||||
@@ -71,13 +71,11 @@ export function useKeypressContext() {
|
||||
export function KeypressProvider({
|
||||
children,
|
||||
kittyProtocolEnabled,
|
||||
pasteWorkaround = false,
|
||||
config,
|
||||
debugKeystrokeLogging,
|
||||
}: {
|
||||
children?: React.ReactNode;
|
||||
children: React.ReactNode;
|
||||
kittyProtocolEnabled: boolean;
|
||||
pasteWorkaround?: boolean;
|
||||
config?: Config;
|
||||
debugKeystrokeLogging?: boolean;
|
||||
}) {
|
||||
@@ -103,8 +101,12 @@ export function KeypressProvider({
|
||||
|
||||
const keypressStream = new PassThrough();
|
||||
let usePassthrough = false;
|
||||
// Use passthrough mode when pasteWorkaround is enabled,
|
||||
if (pasteWorkaround) {
|
||||
const nodeMajorVersion = parseInt(process.versions.node.split('.')[0], 10);
|
||||
if (
|
||||
nodeMajorVersion < 20 ||
|
||||
process.env['PASTE_WORKAROUND'] === '1' ||
|
||||
process.env['PASTE_WORKAROUND'] === 'true'
|
||||
) {
|
||||
usePassthrough = true;
|
||||
}
|
||||
|
||||
@@ -113,8 +115,6 @@ export function KeypressProvider({
|
||||
let kittySequenceBuffer = '';
|
||||
let backslashTimeout: NodeJS.Timeout | null = null;
|
||||
let waitingForEnterAfterBackslash = false;
|
||||
let rawDataBuffer = Buffer.alloc(0);
|
||||
let rawFlushTimeout: NodeJS.Timeout | null = null;
|
||||
|
||||
const parseKittySequence = (sequence: string): Key | null => {
|
||||
const kittyPattern = new RegExp(`^${ESC}\\[(\\d+)(;(\\d+))?([u~])$`);
|
||||
@@ -332,111 +332,54 @@ export function KeypressProvider({
|
||||
broadcast({ ...key, paste: isPaste });
|
||||
};
|
||||
|
||||
const clearRawFlushTimeout = () => {
|
||||
if (rawFlushTimeout) {
|
||||
clearTimeout(rawFlushTimeout);
|
||||
rawFlushTimeout = null;
|
||||
}
|
||||
};
|
||||
|
||||
const createPasteKeyEvent = (
|
||||
name: 'paste-start' | 'paste-end' | '' = '',
|
||||
sequence: string = '',
|
||||
): Key => ({
|
||||
name,
|
||||
ctrl: false,
|
||||
meta: false,
|
||||
shift: false,
|
||||
paste: false,
|
||||
sequence,
|
||||
});
|
||||
|
||||
const flushRawBuffer = () => {
if (!rawDataBuffer.length) {
return;
}

const handleRawKeypress = (data: Buffer) => {
const pasteModePrefixBuffer = Buffer.from(PASTE_MODE_PREFIX);
const pasteModeSuffixBuffer = Buffer.from(PASTE_MODE_SUFFIX);
const data = rawDataBuffer;
let cursor = 0;

while (cursor < data.length) {
const prefixPos = data.indexOf(pasteModePrefixBuffer, cursor);
const suffixPos = data.indexOf(pasteModeSuffixBuffer, cursor);
const hasPrefix =
prefixPos !== -1 &&
prefixPos + pasteModePrefixBuffer.length <= data.length;
const hasSuffix =
suffixPos !== -1 &&
suffixPos + pasteModeSuffixBuffer.length <= data.length;
let pos = 0;
while (pos < data.length) {
const prefixPos = data.indexOf(pasteModePrefixBuffer, pos);
const suffixPos = data.indexOf(pasteModeSuffixBuffer, pos);
const isPrefixNext =
prefixPos !== -1 && (suffixPos === -1 || prefixPos < suffixPos);
const isSuffixNext =
suffixPos !== -1 && (prefixPos === -1 || suffixPos < prefixPos);

let markerPos = -1;
let nextMarkerPos = -1;
let markerLength = 0;
let markerType: 'prefix' | 'suffix' | null = null;

if (hasPrefix && (!hasSuffix || prefixPos < suffixPos)) {
markerPos = prefixPos;
markerLength = pasteModePrefixBuffer.length;
markerType = 'prefix';
} else if (hasSuffix) {
markerPos = suffixPos;
markerLength = pasteModeSuffixBuffer.length;
markerType = 'suffix';
if (isPrefixNext) {
nextMarkerPos = prefixPos;
} else if (isSuffixNext) {
nextMarkerPos = suffixPos;
}
markerLength = pasteModeSuffixBuffer.length;

if (nextMarkerPos === -1) {
keypressStream.write(data.slice(pos));
return;
}

if (markerPos === -1) {
break;
}

const nextData = data.slice(cursor, markerPos);
const nextData = data.slice(pos, nextMarkerPos);
if (nextData.length > 0) {
keypressStream.write(nextData);
}
if (markerType === 'prefix') {
const createPasteKeyEvent = (
name: 'paste-start' | 'paste-end',
): Key => ({
name,
ctrl: false,
meta: false,
shift: false,
paste: false,
sequence: '',
});
if (isPrefixNext) {
handleKeypress(undefined, createPasteKeyEvent('paste-start'));
} else if (markerType === 'suffix') {
} else if (isSuffixNext) {
handleKeypress(undefined, createPasteKeyEvent('paste-end'));
}
cursor = markerPos + markerLength;
}

rawDataBuffer = data.slice(cursor);

if (rawDataBuffer.length === 0) {
return;
}

if (rawDataBuffer.length <= 2 || isPaste) {
keypressStream.write(rawDataBuffer);
} else {
// Flush raw data buffer as a paste event
handleKeypress(undefined, createPasteKeyEvent('paste-start'));
keypressStream.write(rawDataBuffer);
handleKeypress(undefined, createPasteKeyEvent('paste-end'));
}

rawDataBuffer = Buffer.alloc(0);
clearRawFlushTimeout();
};

const handleRawKeypress = (_data: Buffer) => {
const data = Buffer.isBuffer(_data) ? _data : Buffer.from(_data, 'utf8');

// Buffer the incoming data
rawDataBuffer = Buffer.concat([rawDataBuffer, data]);

clearRawFlushTimeout();

// On some Windows terminals, during a paste, the terminal might send a
// single return character chunk. In this case, we need to wait a time period
// to know if it is part of a paste or just a return character.
const isReturnChar =
rawDataBuffer.length <= 2 && rawDataBuffer.includes(0x0d);
if (isReturnChar) {
rawFlushTimeout = setTimeout(flushRawBuffer, 100);
} else {
flushRawBuffer();
pos = nextMarkerPos + markerLength;
}
};

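The comment in `handleRawKeypress` above describes the Windows paste quirk this buffering works around: a paste can begin with a lone carriage-return chunk, which is indistinguishable from the user pressing Enter until more data arrives. A minimal sketch of that idea in isolation; the real code wires into `keypressStream` and `handleKeypress`, so the `flush` callback here is a stand-in.

```ts
// Sketch only: buffer raw chunks, delay a bare CR, treat larger chunks as paste.
let buffer = Buffer.alloc(0);
let flushTimer: NodeJS.Timeout | null = null;

function onRawChunk(
  chunk: Buffer,
  flush: (data: Buffer, asPaste: boolean) => void,
) {
  buffer = Buffer.concat([buffer, chunk]);
  if (flushTimer) {
    clearTimeout(flushTimer);
    flushTimer = null;
  }

  // A bare carriage return may be the first chunk of a paste on some
  // Windows terminals, so wait briefly before deciding what it was.
  const isLoneReturn = buffer.length <= 2 && buffer.includes(0x0d);
  if (isLoneReturn) {
    flushTimer = setTimeout(() => {
      flush(buffer, false);
      buffer = Buffer.alloc(0);
    }, 100);
  } else {
    // Anything larger than a couple of bytes is flushed as pasted content.
    flush(buffer, buffer.length > 2);
    buffer = Buffer.alloc(0);
  }
}
```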
@@ -473,11 +416,6 @@ export function KeypressProvider({
backslashTimeout = null;
}

if (rawFlushTimeout) {
clearTimeout(rawFlushTimeout);
rawFlushTimeout = null;
}

// Flush any pending paste data to avoid data loss on exit.
if (isPaste) {
broadcast({

@@ -495,10 +433,9 @@ export function KeypressProvider({
stdin,
setRawMode,
kittyProtocolEnabled,
debugKeystrokeLogging,
pasteWorkaround,
config,
subscribers,
debugKeystrokeLogging,
]);

return (
|
||||
|
||||
@@ -106,7 +106,6 @@ describe('useSlashCommandProcessor', () => {
|
||||
const mockLoadHistory = vi.fn();
|
||||
const mockOpenThemeDialog = vi.fn();
|
||||
const mockOpenAuthDialog = vi.fn();
|
||||
const mockOpenModelSelectionDialog = vi.fn();
|
||||
const mockSetQuittingMessages = vi.fn();
|
||||
|
||||
const mockConfig = makeFakeConfig({});
|
||||
@@ -123,7 +122,6 @@ describe('useSlashCommandProcessor', () => {
|
||||
mockBuiltinLoadCommands.mockResolvedValue([]);
|
||||
mockFileLoadCommands.mockResolvedValue([]);
|
||||
mockMcpLoadCommands.mockResolvedValue([]);
|
||||
mockOpenModelSelectionDialog.mockClear();
|
||||
});
|
||||
|
||||
const setupProcessorHook = (
|
||||
@@ -152,13 +150,11 @@ describe('useSlashCommandProcessor', () => {
|
||||
mockSetQuittingMessages,
|
||||
vi.fn(), // openPrivacyNotice
|
||||
vi.fn(), // openSettingsDialog
|
||||
mockOpenModelSelectionDialog,
|
||||
vi.fn(), // openSubagentCreateDialog
|
||||
vi.fn(), // openAgentsManagerDialog
|
||||
vi.fn(), // toggleVimEnabled
|
||||
setIsProcessing,
|
||||
vi.fn(), // setGeminiMdFileCount
|
||||
vi.fn(), // _showQuitConfirmation
|
||||
),
|
||||
);
|
||||
|
||||
@@ -399,21 +395,6 @@ describe('useSlashCommandProcessor', () => {
|
||||
expect(mockOpenThemeDialog).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle "dialog: model" action', async () => {
|
||||
const command = createTestCommand({
|
||||
name: 'modelcmd',
|
||||
action: vi.fn().mockResolvedValue({ type: 'dialog', dialog: 'model' }),
|
||||
});
|
||||
const result = setupProcessorHook([command]);
|
||||
await waitFor(() => expect(result.current.slashCommands).toHaveLength(1));
|
||||
|
||||
await act(async () => {
|
||||
await result.current.handleSlashCommand('/modelcmd');
|
||||
});
|
||||
|
||||
expect(mockOpenModelSelectionDialog).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle "load_history" action', async () => {
|
||||
const command = createTestCommand({
|
||||
name: 'load',
|
||||
@@ -923,13 +904,11 @@ describe('useSlashCommandProcessor', () => {
|
||||
mockSetQuittingMessages,
|
||||
vi.fn(), // openPrivacyNotice
|
||||
vi.fn(), // openSettingsDialog
|
||||
vi.fn(), // openModelSelectionDialog
|
||||
vi.fn(), // openSubagentCreateDialog
|
||||
vi.fn(), // openAgentsManagerDialog
|
||||
vi.fn(), // toggleVimEnabled
|
||||
vi.fn(), // setIsProcessing
|
||||
vi.fn(), // setGeminiMdFileCount
|
||||
vi.fn(), // _showQuitConfirmation
|
||||
),
|
||||
);
|
||||
|
||||
|
||||
@@ -53,7 +53,6 @@ export const useSlashCommandProcessor = (
|
||||
setQuittingMessages: (message: HistoryItem[]) => void,
|
||||
openPrivacyNotice: () => void,
|
||||
openSettingsDialog: () => void,
|
||||
openModelSelectionDialog: () => void,
|
||||
openSubagentCreateDialog: () => void,
|
||||
openAgentsManagerDialog: () => void,
|
||||
toggleVimEnabled: () => Promise<boolean>,
|
||||
@@ -405,9 +404,6 @@ export const useSlashCommandProcessor = (
|
||||
case 'settings':
|
||||
openSettingsDialog();
|
||||
return { type: 'handled' };
|
||||
case 'model':
|
||||
openModelSelectionDialog();
|
||||
return { type: 'handled' };
|
||||
case 'subagent_create':
|
||||
openSubagentCreateDialog();
|
||||
return { type: 'handled' };
|
||||
@@ -511,7 +507,7 @@ export const useSlashCommandProcessor = (
|
||||
setTimeout(async () => {
|
||||
await runExitCleanup();
|
||||
process.exit(0);
|
||||
}, 100);
|
||||
}, 1000);
|
||||
return { type: 'handled' };
|
||||
|
||||
case 'submit_prompt':
|
||||
@@ -667,7 +663,6 @@ export const useSlashCommandProcessor = (
|
||||
setSessionShellAllowlist,
|
||||
setIsProcessing,
|
||||
setConfirmationRequest,
|
||||
openModelSelectionDialog,
|
||||
session.stats,
|
||||
],
|
||||
);
|
||||
|
||||
@@ -56,12 +56,6 @@ const MockedUserPromptEvent = vi.hoisted(() =>
|
||||
);
|
||||
const mockParseAndFormatApiError = vi.hoisted(() => vi.fn());
|
||||
|
||||
// Vision auto-switch mocks (hoisted)
|
||||
const mockHandleVisionSwitch = vi.hoisted(() =>
|
||||
vi.fn().mockResolvedValue({ shouldProceed: true }),
|
||||
);
|
||||
const mockRestoreOriginalModel = vi.hoisted(() => vi.fn());
|
||||
|
||||
vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
|
||||
const actualCoreModule = (await importOriginal()) as any;
|
||||
return {
|
||||
@@ -82,13 +76,6 @@ vi.mock('./useReactToolScheduler.js', async (importOriginal) => {
|
||||
};
|
||||
});
|
||||
|
||||
vi.mock('./useVisionAutoSwitch.js', () => ({
|
||||
useVisionAutoSwitch: vi.fn(() => ({
|
||||
handleVisionSwitch: mockHandleVisionSwitch,
|
||||
restoreOriginalModel: mockRestoreOriginalModel,
|
||||
})),
|
||||
}));
|
||||
|
||||
vi.mock('./useKeypress.js', () => ({
|
||||
useKeypress: vi.fn(),
|
||||
}));
|
||||
@@ -212,7 +199,6 @@ describe('useGeminiStream', () => {
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue(contentGeneratorConfig),
|
||||
getMaxSessionTurns: vi.fn(() => 50),
|
||||
} as unknown as Config;
|
||||
mockOnDebugMessage = vi.fn();
|
||||
mockHandleSlashCommand = vi.fn().mockResolvedValue(false);
|
||||
@@ -1565,7 +1551,6 @@ describe('useGeminiStream', () => {
|
||||
expect.any(String), // Argument 3: The prompt_id string
|
||||
);
|
||||
});
|
||||
|
||||
describe('Thought Reset', () => {
|
||||
it('should reset thought to null when starting a new prompt', async () => {
|
||||
// First, simulate a response with a thought
|
||||
@@ -1915,166 +1900,4 @@ describe('useGeminiStream', () => {
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// --- New tests focused on recent modifications ---
|
||||
describe('Vision Auto Switch Integration', () => {
|
||||
it('should call handleVisionSwitch and proceed to send when allowed', async () => {
|
||||
mockHandleVisionSwitch.mockResolvedValueOnce({ shouldProceed: true });
|
||||
mockSendMessageStream.mockReturnValue(
|
||||
(async function* () {
|
||||
yield { type: ServerGeminiEventType.Content, value: 'ok' };
|
||||
yield { type: ServerGeminiEventType.Finished, value: 'STOP' };
|
||||
})(),
|
||||
);
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
useGeminiStream(
|
||||
new MockedGeminiClientClass(mockConfig),
|
||||
[],
|
||||
mockAddItem,
|
||||
mockConfig,
|
||||
mockOnDebugMessage,
|
||||
mockHandleSlashCommand,
|
||||
false,
|
||||
() => 'vscode' as EditorType,
|
||||
() => {},
|
||||
() => Promise.resolve(),
|
||||
false,
|
||||
() => {},
|
||||
() => {},
|
||||
() => {},
|
||||
),
|
||||
);
|
||||
|
||||
await act(async () => {
|
||||
await result.current.submitQuery('image prompt');
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(mockHandleVisionSwitch).toHaveBeenCalled();
|
||||
expect(mockSendMessageStream).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
it('should gate submission when handleVisionSwitch returns shouldProceed=false', async () => {
|
||||
mockHandleVisionSwitch.mockResolvedValueOnce({ shouldProceed: false });
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
useGeminiStream(
|
||||
new MockedGeminiClientClass(mockConfig),
|
||||
[],
|
||||
mockAddItem,
|
||||
mockConfig,
|
||||
mockOnDebugMessage,
|
||||
mockHandleSlashCommand,
|
||||
false,
|
||||
() => 'vscode' as EditorType,
|
||||
() => {},
|
||||
() => Promise.resolve(),
|
||||
false,
|
||||
() => {},
|
||||
() => {},
|
||||
() => {},
|
||||
),
|
||||
);
|
||||
|
||||
await act(async () => {
|
||||
await result.current.submitQuery('vision-gated');
|
||||
});
|
||||
|
||||
// No call to API, no restoreOriginalModel needed since no override occurred
|
||||
expect(mockSendMessageStream).not.toHaveBeenCalled();
|
||||
expect(mockRestoreOriginalModel).not.toHaveBeenCalled();
|
||||
|
||||
// Next call allowed (flag reset path)
|
||||
mockHandleVisionSwitch.mockResolvedValueOnce({ shouldProceed: true });
|
||||
mockSendMessageStream.mockReturnValue(
|
||||
(async function* () {
|
||||
yield { type: ServerGeminiEventType.Content, value: 'ok' };
|
||||
yield { type: ServerGeminiEventType.Finished, value: 'STOP' };
|
||||
})(),
|
||||
);
|
||||
await act(async () => {
|
||||
await result.current.submitQuery('after-gate');
|
||||
});
|
||||
await waitFor(() => {
|
||||
expect(mockSendMessageStream).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('Model restore on completion and errors', () => {
|
||||
it('should restore model after successful stream completion', async () => {
|
||||
mockSendMessageStream.mockReturnValue(
|
||||
(async function* () {
|
||||
yield { type: ServerGeminiEventType.Content, value: 'content' };
|
||||
yield { type: ServerGeminiEventType.Finished, value: 'STOP' };
|
||||
})(),
|
||||
);
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
useGeminiStream(
|
||||
new MockedGeminiClientClass(mockConfig),
|
||||
[],
|
||||
mockAddItem,
|
||||
mockConfig,
|
||||
mockOnDebugMessage,
|
||||
mockHandleSlashCommand,
|
||||
false,
|
||||
() => 'vscode' as EditorType,
|
||||
() => {},
|
||||
() => Promise.resolve(),
|
||||
false,
|
||||
() => {},
|
||||
() => {},
|
||||
() => {},
|
||||
),
|
||||
);
|
||||
|
||||
await act(async () => {
|
||||
await result.current.submitQuery('restore-success');
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(mockRestoreOriginalModel).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
|
||||
it('should restore model when an error occurs during streaming', async () => {
|
||||
const testError = new Error('stream failure');
|
||||
mockSendMessageStream.mockReturnValue(
|
||||
(async function* () {
|
||||
yield { type: ServerGeminiEventType.Content, value: 'content' };
|
||||
throw testError;
|
||||
})(),
|
||||
);
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
useGeminiStream(
|
||||
new MockedGeminiClientClass(mockConfig),
|
||||
[],
|
||||
mockAddItem,
|
||||
mockConfig,
|
||||
mockOnDebugMessage,
|
||||
mockHandleSlashCommand,
|
||||
false,
|
||||
() => 'vscode' as EditorType,
|
||||
() => {},
|
||||
() => Promise.resolve(),
|
||||
false,
|
||||
() => {},
|
||||
() => {},
|
||||
() => {},
|
||||
),
|
||||
);
|
||||
|
||||
await act(async () => {
|
||||
await result.current.submitQuery('restore-error');
|
||||
});
|
||||
|
||||
await waitFor(() => {
|
||||
expect(mockRestoreOriginalModel).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -42,7 +42,6 @@ import type {
|
||||
import { StreamingState, MessageType, ToolCallStatus } from '../types.js';
|
||||
import { isAtCommand, isSlashCommand } from '../utils/commandUtils.js';
|
||||
import { useShellCommandProcessor } from './shellCommandProcessor.js';
|
||||
import { useVisionAutoSwitch } from './useVisionAutoSwitch.js';
|
||||
import { handleAtCommand } from './atCommandProcessor.js';
|
||||
import { findLastSafeSplitPoint } from '../utils/markdownUtilities.js';
|
||||
import { useStateAndRef } from './useStateAndRef.js';
|
||||
@@ -89,12 +88,6 @@ export const useGeminiStream = (
|
||||
setModelSwitchedFromQuotaError: React.Dispatch<React.SetStateAction<boolean>>,
|
||||
onEditorClose: () => void,
|
||||
onCancelSubmit: () => void,
|
||||
visionModelPreviewEnabled: boolean = false,
|
||||
onVisionSwitchRequired?: (query: PartListUnion) => Promise<{
|
||||
modelOverride?: string;
|
||||
persistSessionModel?: string;
|
||||
showGuidance?: boolean;
|
||||
}>,
|
||||
) => {
|
||||
const [initError, setInitError] = useState<string | null>(null);
|
||||
const abortControllerRef = useRef<AbortController | null>(null);
|
||||
@@ -162,13 +155,6 @@ export const useGeminiStream = (
|
||||
geminiClient,
|
||||
);
|
||||
|
||||
const { handleVisionSwitch, restoreOriginalModel } = useVisionAutoSwitch(
|
||||
config,
|
||||
addItem,
|
||||
visionModelPreviewEnabled,
|
||||
onVisionSwitchRequired,
|
||||
);
|
||||
|
||||
const streamingState = useMemo(() => {
|
||||
if (toolCalls.some((tc) => tc.status === 'awaiting_approval')) {
|
||||
return StreamingState.WaitingForConfirmation;
|
||||
@@ -729,20 +715,6 @@ export const useGeminiStream = (
|
||||
return;
|
||||
}
|
||||
|
||||
// Handle vision switch requirement
|
||||
const visionSwitchResult = await handleVisionSwitch(
|
||||
queryToSend,
|
||||
userMessageTimestamp,
|
||||
options?.isContinuation || false,
|
||||
);
|
||||
|
||||
if (!visionSwitchResult.shouldProceed) {
|
||||
isSubmittingQueryRef.current = false;
|
||||
return;
|
||||
}
|
||||
|
||||
const finalQueryToSend = queryToSend;
|
||||
|
||||
if (!options?.isContinuation) {
|
||||
startNewPrompt();
|
||||
setThought(null); // Reset thought when starting a new prompt
|
||||
@@ -753,7 +725,7 @@ export const useGeminiStream = (
|
||||
|
||||
try {
|
||||
const stream = geminiClient.sendMessageStream(
|
||||
finalQueryToSend,
|
||||
queryToSend,
|
||||
abortSignal,
|
||||
prompt_id!,
|
||||
);
|
||||
@@ -764,8 +736,6 @@ export const useGeminiStream = (
|
||||
);
|
||||
|
||||
if (processingStatus === StreamProcessingStatus.UserCancelled) {
|
||||
// Restore original model if it was temporarily overridden
|
||||
restoreOriginalModel();
|
||||
isSubmittingQueryRef.current = false;
|
||||
return;
|
||||
}
|
||||
@@ -778,13 +748,7 @@ export const useGeminiStream = (
|
||||
loopDetectedRef.current = false;
|
||||
handleLoopDetectedEvent();
|
||||
}
|
||||
|
||||
// Restore original model if it was temporarily overridden
|
||||
restoreOriginalModel();
|
||||
} catch (error: unknown) {
|
||||
// Restore original model if it was temporarily overridden
|
||||
restoreOriginalModel();
|
||||
|
||||
if (error instanceof UnauthorizedError) {
|
||||
onAuthError();
|
||||
} else if (!isNodeError(error) || error.name !== 'AbortError') {
|
||||
@@ -822,8 +786,6 @@ export const useGeminiStream = (
|
||||
startNewPrompt,
|
||||
getPromptCount,
|
||||
handleLoopDetectedEvent,
|
||||
handleVisionSwitch,
|
||||
restoreOriginalModel,
|
||||
],
|
||||
);
|
||||
|
||||
@@ -949,13 +911,10 @@ export const useGeminiStream = (
|
||||
],
|
||||
);
|
||||
|
||||
const pendingHistoryItems = useMemo(
|
||||
() =>
|
||||
[pendingHistoryItemRef.current, pendingToolCallGroupDisplay].filter(
|
||||
(i) => i !== undefined && i !== null,
|
||||
),
|
||||
[pendingHistoryItemRef, pendingToolCallGroupDisplay],
|
||||
);
|
||||
const pendingHistoryItems = [
|
||||
pendingHistoryItemRef.current,
|
||||
pendingToolCallGroupDisplay,
|
||||
].filter((i) => i !== undefined && i !== null);
|
||||
|
||||
useEffect(() => {
|
||||
const saveRestorableToolCalls = async () => {
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
*/
|
||||
|
||||
import React from 'react';
|
||||
import { renderHook, act, waitFor } from '@testing-library/react';
|
||||
import { renderHook, act } from '@testing-library/react';
|
||||
import type { Key } from './useKeypress.js';
|
||||
import { useKeypress } from './useKeypress.js';
|
||||
import { KeypressProvider } from '../contexts/KeypressContext.js';
|
||||
@@ -55,8 +55,8 @@ vi.mock('readline', () => {
|
||||
class MockStdin extends EventEmitter {
|
||||
isTTY = true;
|
||||
setRawMode = vi.fn();
|
||||
override on = this.addListener;
|
||||
override removeListener = super.removeListener;
|
||||
on = this.addListener;
|
||||
removeListener = this.removeListener;
|
||||
write = vi.fn();
|
||||
resume = vi.fn();
|
||||
|
||||
@@ -106,33 +106,12 @@ describe('useKeypress', () => {
|
||||
let originalNodeVersion: string;
|
||||
|
||||
const wrapper = ({ children }: { children: React.ReactNode }) =>
|
||||
React.createElement(
|
||||
KeypressProvider,
|
||||
{
|
||||
kittyProtocolEnabled: false,
|
||||
pasteWoraround: false,
|
||||
},
|
||||
children,
|
||||
);
|
||||
|
||||
const wrapperWithWindowsWorkaround = ({
|
||||
children,
|
||||
}: {
|
||||
children: React.ReactNode;
|
||||
}) =>
|
||||
React.createElement(
|
||||
KeypressProvider,
|
||||
{
|
||||
kittyProtocolEnabled: false,
|
||||
pasteWoraround: true,
|
||||
},
|
||||
children,
|
||||
);
|
||||
React.createElement(KeypressProvider, null, children);
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
stdin = new MockStdin();
|
||||
(useStdin as ReturnType<typeof vi.fn>).mockReturnValue({
|
||||
(useStdin as vi.Mock).mockReturnValue({
|
||||
stdin,
|
||||
setRawMode: mockSetRawMode,
|
||||
});
|
||||
@@ -209,33 +188,34 @@ describe('useKeypress', () => {
|
||||
description: 'Modern Node (>= v20)',
|
||||
setup: () => setNodeVersion('20.0.0'),
|
||||
isLegacy: false,
|
||||
pasteWoraround: false,
|
||||
},
|
||||
{
|
||||
description: 'PasteWorkaround Environment Variable',
|
||||
description: 'Legacy Node (< v20)',
|
||||
setup: () => setNodeVersion('18.0.0'),
|
||||
isLegacy: true,
|
||||
},
|
||||
{
|
||||
description: 'Workaround Env Var',
|
||||
setup: () => {
|
||||
setNodeVersion('20.0.0');
|
||||
vi.stubEnv('PASTE_WORKAROUND', 'true');
|
||||
},
|
||||
isLegacy: false,
|
||||
pasteWoraround: true,
|
||||
isLegacy: true,
|
||||
},
|
||||
])('in $description', ({ setup, isLegacy, pasteWoraround }) => {
|
||||
])('in $description', ({ setup, isLegacy }) => {
|
||||
beforeEach(() => {
|
||||
setup();
|
||||
stdin.setLegacy(isLegacy);
|
||||
});
|
||||
|
||||
it('should process a paste as a single event', async () => {
|
||||
it('should process a paste as a single event', () => {
|
||||
renderHook(() => useKeypress(onKeypress, { isActive: true }), {
|
||||
wrapper: pasteWoraround ? wrapperWithWindowsWorkaround : wrapper,
|
||||
wrapper,
|
||||
});
|
||||
const pasteText = 'hello world';
|
||||
act(() => stdin.paste(pasteText));
|
||||
|
||||
await waitFor(() => {
|
||||
expect(onKeypress).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
expect(onKeypress).toHaveBeenCalledTimes(1);
|
||||
expect(onKeypress).toHaveBeenCalledWith({
|
||||
name: '',
|
||||
ctrl: false,
|
||||
@@ -246,59 +226,47 @@ describe('useKeypress', () => {
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle keypress interspersed with pastes', async () => {
|
||||
it('should handle keypress interspersed with pastes', () => {
|
||||
renderHook(() => useKeypress(onKeypress, { isActive: true }), {
|
||||
wrapper: pasteWoraround ? wrapperWithWindowsWorkaround : wrapper,
|
||||
wrapper,
|
||||
});
|
||||
|
||||
const keyA = { name: 'a', sequence: 'a' };
|
||||
act(() => stdin.pressKey(keyA));
|
||||
|
||||
await waitFor(() => {
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ ...keyA, paste: false }),
|
||||
);
|
||||
});
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ ...keyA, paste: false }),
|
||||
);
|
||||
|
||||
const pasteText = 'pasted';
|
||||
act(() => stdin.paste(pasteText));
|
||||
|
||||
await waitFor(() => {
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ paste: true, sequence: pasteText }),
|
||||
);
|
||||
});
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ paste: true, sequence: pasteText }),
|
||||
);
|
||||
|
||||
const keyB = { name: 'b', sequence: 'b' };
|
||||
act(() => stdin.pressKey(keyB));
|
||||
|
||||
await waitFor(() => {
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ ...keyB, paste: false }),
|
||||
);
|
||||
});
|
||||
expect(onKeypress).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ ...keyB, paste: false }),
|
||||
);
|
||||
|
||||
expect(onKeypress).toHaveBeenCalledTimes(3);
|
||||
});
|
||||
|
||||
it('should emit partial paste content if unmounted mid-paste', async () => {
|
||||
it('should emit partial paste content if unmounted mid-paste', () => {
|
||||
const { unmount } = renderHook(
|
||||
() => useKeypress(onKeypress, { isActive: true }),
|
||||
{
|
||||
wrapper: pasteWoraround ? wrapperWithWindowsWorkaround : wrapper,
|
||||
},
|
||||
{ wrapper },
|
||||
);
|
||||
const pasteText = 'incomplete paste';
|
||||
|
||||
act(() => stdin.startPaste(pasteText));
|
||||
|
||||
// No event should be fired yet for incomplete paste
|
||||
// No event should be fired yet.
|
||||
expect(onKeypress).not.toHaveBeenCalled();
|
||||
|
||||
// Unmounting should trigger the flush
|
||||
// Unmounting should trigger the flush.
|
||||
unmount();
|
||||
|
||||
// Both legacy and modern modes now flush partial paste content on unmount
|
||||
expect(onKeypress).toHaveBeenCalledTimes(1);
|
||||
expect(onKeypress).toHaveBeenCalledWith({
|
||||
name: '',
|
||||
|
||||
@@ -1,374 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/* eslint-disable @typescript-eslint/no-explicit-any */
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import { renderHook, act } from '@testing-library/react';
|
||||
import type { Part, PartListUnion } from '@google/genai';
|
||||
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
|
||||
import {
|
||||
shouldOfferVisionSwitch,
|
||||
processVisionSwitchOutcome,
|
||||
getVisionSwitchGuidanceMessage,
|
||||
useVisionAutoSwitch,
|
||||
} from './useVisionAutoSwitch.js';
|
||||
import { VisionSwitchOutcome } from '../components/ModelSwitchDialog.js';
|
||||
import { MessageType } from '../types.js';
|
||||
import { getDefaultVisionModel } from '../models/availableModels.js';
|
||||
|
||||
describe('useVisionAutoSwitch helpers', () => {
|
||||
describe('shouldOfferVisionSwitch', () => {
|
||||
it('returns false when authType is not QWEN_OAUTH', () => {
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.USE_GEMINI,
|
||||
'qwen3-coder-plus',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('returns false when current model is already a vision model', () => {
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen-vl-max-latest',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('returns true when image parts exist, QWEN_OAUTH, and model is not vision', () => {
|
||||
const parts: PartListUnion = [
|
||||
{ text: 'hello' },
|
||||
{ inlineData: { mimeType: 'image/jpeg', data: '...' } },
|
||||
];
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen3-coder-plus',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
it('detects image when provided as a single Part object (non-array)', () => {
|
||||
const singleImagePart: PartListUnion = {
|
||||
fileData: { mimeType: 'image/gif', fileUri: 'file://image.gif' },
|
||||
} as Part;
|
||||
const result = shouldOfferVisionSwitch(
|
||||
singleImagePart,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen3-coder-plus',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
it('returns false when parts contain no images', () => {
|
||||
const parts: PartListUnion = [{ text: 'just text' }];
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen3-coder-plus',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('returns false when parts is a plain string', () => {
|
||||
const parts: PartListUnion = 'plain text';
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen3-coder-plus',
|
||||
true,
|
||||
);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('returns false when visionModelPreviewEnabled is false', () => {
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
const result = shouldOfferVisionSwitch(
|
||||
parts,
|
||||
AuthType.QWEN_OAUTH,
|
||||
'qwen3-coder-plus',
|
||||
false,
|
||||
);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('processVisionSwitchOutcome', () => {
|
||||
it('maps SwitchOnce to a one-time model override', () => {
|
||||
const vl = getDefaultVisionModel();
|
||||
const result = processVisionSwitchOutcome(VisionSwitchOutcome.SwitchOnce);
|
||||
expect(result).toEqual({ modelOverride: vl });
|
||||
});
|
||||
|
||||
it('maps SwitchSessionToVL to a persistent session model', () => {
|
||||
const vl = getDefaultVisionModel();
|
||||
const result = processVisionSwitchOutcome(
|
||||
VisionSwitchOutcome.SwitchSessionToVL,
|
||||
);
|
||||
expect(result).toEqual({ persistSessionModel: vl });
|
||||
});
|
||||
|
||||
it('maps DisallowWithGuidance to showGuidance', () => {
|
||||
const result = processVisionSwitchOutcome(
|
||||
VisionSwitchOutcome.DisallowWithGuidance,
|
||||
);
|
||||
expect(result).toEqual({ showGuidance: true });
|
||||
});
|
||||
});
|
||||
|
||||
describe('getVisionSwitchGuidanceMessage', () => {
|
||||
it('returns the expected guidance message', () => {
|
||||
const vl = getDefaultVisionModel();
|
||||
const expected =
|
||||
'To use images with your query, you can:\n' +
|
||||
`• Use /model set ${vl} to switch to a vision-capable model\n` +
|
||||
'• Or remove the image and provide a text description instead';
|
||||
expect(getVisionSwitchGuidanceMessage()).toBe(expected);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('useVisionAutoSwitch hook', () => {
|
||||
type AddItemFn = (
|
||||
item: { type: MessageType; text: string },
|
||||
ts: number,
|
||||
) => any;
|
||||
|
||||
const createMockConfig = (authType: AuthType, initialModel: string) => {
|
||||
let currentModel = initialModel;
|
||||
const mockConfig: Partial<Config> = {
|
||||
getModel: vi.fn(() => currentModel),
|
||||
setModel: vi.fn((m: string) => {
|
||||
currentModel = m;
|
||||
}),
|
||||
getContentGeneratorConfig: vi.fn(() => ({
|
||||
authType,
|
||||
model: currentModel,
|
||||
apiKey: 'test-key',
|
||||
vertexai: false,
|
||||
})),
|
||||
};
|
||||
return mockConfig as Config;
|
||||
};
|
||||
|
||||
let addItem: AddItemFn;
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
addItem = vi.fn();
|
||||
});
|
||||
|
||||
it('returns shouldProceed=true immediately for continuations', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, vi.fn()),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, Date.now(), true);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(addItem).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('does nothing when authType is not QWEN_OAUTH', async () => {
|
||||
const config = createMockConfig(AuthType.USE_GEMINI, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi.fn();
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 123, false);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(onVisionSwitchRequired).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('does nothing when there are no image parts', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi.fn();
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [{ text: 'no images here' }];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 456, false);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(onVisionSwitchRequired).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('shows guidance and blocks when dialog returns showGuidance', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi
|
||||
.fn()
|
||||
.mockResolvedValue({ showGuidance: true });
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
|
||||
const userTs = 1010;
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, userTs, false);
|
||||
});
|
||||
|
||||
expect(addItem).toHaveBeenCalledWith(
|
||||
{ type: MessageType.INFO, text: getVisionSwitchGuidanceMessage() },
|
||||
userTs,
|
||||
);
|
||||
expect(res).toEqual({ shouldProceed: false });
|
||||
expect(config.setModel).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('applies a one-time override and returns originalModel, then restores', async () => {
|
||||
const initialModel = 'qwen3-coder-plus';
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, initialModel);
|
||||
const onVisionSwitchRequired = vi
|
||||
.fn()
|
||||
.mockResolvedValue({ modelOverride: 'qwen-vl-max-latest' });
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 2020, false);
|
||||
});
|
||||
|
||||
expect(res).toEqual({ shouldProceed: true, originalModel: initialModel });
|
||||
expect(config.setModel).toHaveBeenCalledWith('qwen-vl-max-latest');
|
||||
|
||||
// Now restore
|
||||
act(() => {
|
||||
result.current.restoreOriginalModel();
|
||||
});
|
||||
expect(config.setModel).toHaveBeenLastCalledWith(initialModel);
|
||||
});
|
||||
|
||||
it('persists session model when dialog requests persistence', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi
|
||||
.fn()
|
||||
.mockResolvedValue({ persistSessionModel: 'qwen-vl-max-latest' });
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 3030, false);
|
||||
});
|
||||
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(config.setModel).toHaveBeenCalledWith('qwen-vl-max-latest');
|
||||
|
||||
// Restore should be a no-op since no one-time override was used
|
||||
act(() => {
|
||||
result.current.restoreOriginalModel();
|
||||
});
|
||||
// Last call should still be the persisted model set
|
||||
expect((config.setModel as any).mock.calls.pop()?.[0]).toBe(
|
||||
'qwen-vl-max-latest',
|
||||
);
|
||||
});
|
||||
|
||||
it('returns shouldProceed=true when dialog returns no special flags', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi.fn().mockResolvedValue({});
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 4040, false);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(config.setModel).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('blocks when dialog throws or is cancelled', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi.fn().mockRejectedValue(new Error('x'));
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(config, addItem as any, true, onVisionSwitchRequired),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 5050, false);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: false });
|
||||
expect(config.setModel).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('does nothing when visionModelPreviewEnabled is false', async () => {
|
||||
const config = createMockConfig(AuthType.QWEN_OAUTH, 'qwen3-coder-plus');
|
||||
const onVisionSwitchRequired = vi.fn();
|
||||
const { result } = renderHook(() =>
|
||||
useVisionAutoSwitch(
|
||||
config,
|
||||
addItem as any,
|
||||
false,
|
||||
onVisionSwitchRequired,
|
||||
),
|
||||
);
|
||||
|
||||
const parts: PartListUnion = [
|
||||
{ inlineData: { mimeType: 'image/png', data: '...' } },
|
||||
];
|
||||
let res: any;
|
||||
await act(async () => {
|
||||
res = await result.current.handleVisionSwitch(parts, 6060, false);
|
||||
});
|
||||
expect(res).toEqual({ shouldProceed: true });
|
||||
expect(onVisionSwitchRequired).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
@@ -1,304 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { type PartListUnion, type Part } from '@google/genai';
|
||||
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
|
||||
import { useCallback, useRef } from 'react';
|
||||
import { VisionSwitchOutcome } from '../components/ModelSwitchDialog.js';
|
||||
import {
|
||||
getDefaultVisionModel,
|
||||
isVisionModel,
|
||||
} from '../models/availableModels.js';
|
||||
import { MessageType } from '../types.js';
|
||||
import type { UseHistoryManagerReturn } from './useHistoryManager.js';
|
||||
import {
|
||||
isSupportedImageMimeType,
|
||||
getUnsupportedImageFormatWarning,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
|
||||
/**
|
||||
* Checks if a PartListUnion contains image parts
|
||||
*/
|
||||
function hasImageParts(parts: PartListUnion): boolean {
|
||||
if (typeof parts === 'string') {
|
||||
return false;
|
||||
}
|
||||
|
||||
if (Array.isArray(parts)) {
|
||||
return parts.some((part) => {
|
||||
// Skip string parts
|
||||
if (typeof part === 'string') return false;
|
||||
return isImagePart(part);
|
||||
});
|
||||
}
|
||||
|
||||
// If it's a single Part (not a string), check if it's an image
|
||||
if (typeof parts === 'object') {
|
||||
return isImagePart(parts);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if a single Part is an image part
|
||||
*/
|
||||
function isImagePart(part: Part): boolean {
|
||||
// Check for inlineData with image mime type
|
||||
if ('inlineData' in part && part.inlineData?.mimeType?.startsWith('image/')) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// Check for fileData with image mime type
|
||||
if ('fileData' in part && part.fileData?.mimeType?.startsWith('image/')) {
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if image parts have supported formats and returns unsupported ones
|
||||
*/
|
||||
function checkImageFormatsSupport(parts: PartListUnion): {
|
||||
hasImages: boolean;
|
||||
hasUnsupportedFormats: boolean;
|
||||
unsupportedMimeTypes: string[];
|
||||
} {
|
||||
const unsupportedMimeTypes: string[] = [];
|
||||
let hasImages = false;
|
||||
|
||||
if (typeof parts === 'string') {
|
||||
return {
|
||||
hasImages: false,
|
||||
hasUnsupportedFormats: false,
|
||||
unsupportedMimeTypes: [],
|
||||
};
|
||||
}
|
||||
|
||||
const partsArray = Array.isArray(parts) ? parts : [parts];
|
||||
|
||||
for (const part of partsArray) {
|
||||
if (typeof part === 'string') continue;
|
||||
|
||||
let mimeType: string | undefined;
|
||||
|
||||
// Check inlineData
|
||||
if (
|
||||
'inlineData' in part &&
|
||||
part.inlineData?.mimeType?.startsWith('image/')
|
||||
) {
|
||||
hasImages = true;
|
||||
mimeType = part.inlineData.mimeType;
|
||||
}
|
||||
|
||||
// Check fileData
|
||||
if ('fileData' in part && part.fileData?.mimeType?.startsWith('image/')) {
|
||||
hasImages = true;
|
||||
mimeType = part.fileData.mimeType;
|
||||
}
|
||||
|
||||
// Check if the mime type is supported
|
||||
if (mimeType && !isSupportedImageMimeType(mimeType)) {
|
||||
unsupportedMimeTypes.push(mimeType);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
hasImages,
|
||||
hasUnsupportedFormats: unsupportedMimeTypes.length > 0,
|
||||
unsupportedMimeTypes,
|
||||
};
|
||||
}
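For a mixed part list, the helper above reports both whether images are present and which MIME types fail `isSupportedImageMimeType`. The example below assumes, for illustration only, that `image/tiff` is not in the supported set.

```ts
checkImageFormatsSupport([
  { text: 'describe this' },
  { inlineData: { mimeType: 'image/png', data: '...' } },
  { fileData: { mimeType: 'image/tiff', fileUri: 'file://scan.tiff' } },
]);
// => { hasImages: true, hasUnsupportedFormats: true,
//      unsupportedMimeTypes: ['image/tiff'] }
```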
|
||||
|
||||
/**
|
||||
* Determines if we should offer vision switch for the given parts, auth type, and current model
|
||||
*/
|
||||
export function shouldOfferVisionSwitch(
|
||||
parts: PartListUnion,
|
||||
authType: AuthType,
|
||||
currentModel: string,
|
||||
visionModelPreviewEnabled: boolean = false,
|
||||
): boolean {
|
||||
// Only trigger for qwen-oauth
|
||||
if (authType !== AuthType.QWEN_OAUTH) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// If vision model preview is disabled, never offer vision switch
|
||||
if (!visionModelPreviewEnabled) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// If current model is already a vision model, no need to switch
|
||||
if (isVisionModel(currentModel)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Check if the current message contains image parts
|
||||
return hasImageParts(parts);
|
||||
}
|
||||
|
||||
/**
|
||||
* Interface for vision switch result
|
||||
*/
|
||||
export interface VisionSwitchResult {
|
||||
modelOverride?: string;
|
||||
persistSessionModel?: string;
|
||||
showGuidance?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Processes the vision switch outcome and returns the appropriate result
|
||||
*/
|
||||
export function processVisionSwitchOutcome(
|
||||
outcome: VisionSwitchOutcome,
|
||||
): VisionSwitchResult {
|
||||
const vlModelId = getDefaultVisionModel();
|
||||
|
||||
switch (outcome) {
|
||||
case VisionSwitchOutcome.SwitchOnce:
|
||||
return { modelOverride: vlModelId };
|
||||
|
||||
case VisionSwitchOutcome.SwitchSessionToVL:
|
||||
return { persistSessionModel: vlModelId };
|
||||
|
||||
case VisionSwitchOutcome.DisallowWithGuidance:
|
||||
return { showGuidance: true };
|
||||
|
||||
default:
|
||||
return { showGuidance: true };
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the guidance message for when vision switch is disallowed
|
||||
*/
|
||||
export function getVisionSwitchGuidanceMessage(): string {
|
||||
const vlModelId = getDefaultVisionModel();
|
||||
return `To use images with your query, you can:
|
||||
• Use /model set ${vlModelId} to switch to a vision-capable model
|
||||
• Or remove the image and provide a text description instead`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Interface for vision switch handling result
|
||||
*/
|
||||
export interface VisionSwitchHandlingResult {
|
||||
shouldProceed: boolean;
|
||||
originalModel?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Custom hook for handling vision model auto-switching
|
||||
*/
|
||||
export function useVisionAutoSwitch(
|
||||
config: Config,
|
||||
addItem: UseHistoryManagerReturn['addItem'],
|
||||
visionModelPreviewEnabled: boolean = false,
|
||||
onVisionSwitchRequired?: (query: PartListUnion) => Promise<{
|
||||
modelOverride?: string;
|
||||
persistSessionModel?: string;
|
||||
showGuidance?: boolean;
|
||||
}>,
|
||||
) {
|
||||
const originalModelRef = useRef<string | null>(null);
|
||||
|
||||
const handleVisionSwitch = useCallback(
|
||||
async (
|
||||
query: PartListUnion,
|
||||
userMessageTimestamp: number,
|
||||
isContinuation: boolean,
|
||||
): Promise<VisionSwitchHandlingResult> => {
|
||||
// Skip vision switch handling for continuations or if no handler provided
|
||||
if (isContinuation || !onVisionSwitchRequired) {
|
||||
return { shouldProceed: true };
|
||||
}
|
||||
|
||||
const contentGeneratorConfig = config.getContentGeneratorConfig();
|
||||
|
||||
// Only handle qwen-oauth auth type
|
||||
if (contentGeneratorConfig?.authType !== AuthType.QWEN_OAUTH) {
|
||||
return { shouldProceed: true };
|
||||
}
|
||||
|
||||
// Check image format support first
|
||||
const formatCheck = checkImageFormatsSupport(query);
|
||||
|
||||
// If there are unsupported image formats, show warning
|
||||
if (formatCheck.hasUnsupportedFormats) {
|
||||
addItem(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: getUnsupportedImageFormatWarning(),
|
||||
},
|
||||
userMessageTimestamp,
|
||||
);
|
||||
// Continue processing but with warning shown
|
||||
}
|
||||
|
||||
// Check if vision switch is needed
|
||||
if (
|
||||
!shouldOfferVisionSwitch(
|
||||
query,
|
||||
contentGeneratorConfig.authType,
|
||||
config.getModel(),
|
||||
visionModelPreviewEnabled,
|
||||
)
|
||||
) {
|
||||
return { shouldProceed: true };
|
||||
}
|
||||
|
||||
try {
|
||||
const visionSwitchResult = await onVisionSwitchRequired(query);
|
||||
|
||||
if (visionSwitchResult.showGuidance) {
|
||||
// Show guidance and don't proceed with the request
|
||||
addItem(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: getVisionSwitchGuidanceMessage(),
|
||||
},
|
||||
userMessageTimestamp,
|
||||
);
|
||||
return { shouldProceed: false };
|
||||
}
|
||||
|
||||
if (visionSwitchResult.modelOverride) {
|
||||
// One-time model override
|
||||
originalModelRef.current = config.getModel();
|
||||
config.setModel(visionSwitchResult.modelOverride);
|
||||
return {
|
||||
shouldProceed: true,
|
||||
originalModel: originalModelRef.current,
|
||||
};
|
||||
} else if (visionSwitchResult.persistSessionModel) {
|
||||
// Persistent session model change
|
||||
config.setModel(visionSwitchResult.persistSessionModel);
|
||||
return { shouldProceed: true };
|
||||
}
|
||||
|
||||
return { shouldProceed: true };
|
||||
} catch (_error) {
|
||||
// If vision switch dialog was cancelled or errored, don't proceed
|
||||
return { shouldProceed: false };
|
||||
}
|
||||
},
|
||||
[config, addItem, visionModelPreviewEnabled, onVisionSwitchRequired],
|
||||
);
|
||||
|
||||
const restoreOriginalModel = useCallback(() => {
|
||||
if (originalModelRef.current) {
|
||||
config.setModel(originalModelRef.current);
|
||||
originalModelRef.current = null;
|
||||
}
|
||||
}, [config]);
|
||||
|
||||
return {
|
||||
handleVisionSwitch,
|
||||
restoreOriginalModel,
|
||||
};
|
||||
}
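A minimal sketch of how a caller might wire this hook, mirroring the `useGeminiStream` integration earlier in this diff. The inline dialog handler and the `sendToModel` helper are hypothetical stand-ins, not part of the real API.

```ts
// Inside a React component; illustrative wiring only.
const { handleVisionSwitch, restoreOriginalModel } = useVisionAutoSwitch(
  config,
  addItem,
  /* visionModelPreviewEnabled */ true,
  async () => ({ modelOverride: getDefaultVisionModel() }), // stand-in dialog
);

async function submit(parts: PartListUnion) {
  const { shouldProceed } = await handleVisionSwitch(parts, Date.now(), false);
  if (!shouldProceed) {
    return; // guidance was shown instead of sending the request
  }
  try {
    await sendToModel(parts); // hypothetical send helper
  } finally {
    restoreOriginalModel(); // undoes a one-time override, no-op otherwise
  }
}
```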
|
||||
@@ -1,52 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
export type AvailableModel = {
|
||||
id: string;
|
||||
label: string;
|
||||
isVision?: boolean;
|
||||
};
|
||||
|
||||
export const AVAILABLE_MODELS_QWEN: AvailableModel[] = [
|
||||
{ id: 'qwen3-coder-plus', label: 'qwen3-coder-plus' },
|
||||
{ id: 'qwen-vl-max-latest', label: 'qwen-vl-max', isVision: true },
|
||||
];
|
||||
|
||||
/**
|
||||
* Get available Qwen models filtered by vision model preview setting
|
||||
*/
|
||||
export function getFilteredQwenModels(
|
||||
visionModelPreviewEnabled: boolean,
|
||||
): AvailableModel[] {
|
||||
if (visionModelPreviewEnabled) {
|
||||
return AVAILABLE_MODELS_QWEN;
|
||||
}
|
||||
return AVAILABLE_MODELS_QWEN.filter((model) => !model.isVision);
|
||||
}
|
||||
|
||||
/**
|
||||
* Currently we use the single model of `OPENAI_MODEL` in the env.
|
||||
* In the future, after settings.json is updated, we will allow users to configure this themselves.
|
||||
*/
|
||||
export function getOpenAIAvailableModelFromEnv(): AvailableModel | null {
|
||||
const id = process.env['OPENAI_MODEL']?.trim();
|
||||
return id ? { id, label: id } : null;
|
||||
}
|
||||
|
||||
/**
* Hard code the default vision model as a string literal,
|
||||
* until our coding model supports multimodal.
|
||||
*/
|
||||
export function getDefaultVisionModel(): string {
|
||||
return 'qwen-vl-max-latest';
|
||||
}
|
||||
|
||||
export function isVisionModel(modelId: string): boolean {
|
||||
return AVAILABLE_MODELS_QWEN.some(
|
||||
(model) => model.id === modelId && model.isVision,
|
||||
);
|
||||
}
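Given the two entries in `AVAILABLE_MODELS_QWEN` above, the helpers behave as follows (illustrative usage):

```ts
getFilteredQwenModels(true); // [qwen3-coder-plus, qwen-vl-max-latest]
getFilteredQwenModels(false); // [qwen3-coder-plus] — vision model filtered out
isVisionModel('qwen-vl-max-latest'); // true
isVisionModel('qwen3-coder-plus'); // false
getDefaultVisionModel(); // 'qwen-vl-max-latest'
```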
|
||||
@@ -19,4 +19,3 @@ export {
|
||||
} from './src/telemetry/types.js';
|
||||
export { makeFakeConfig } from './src/test-utils/config.js';
|
||||
export * from './src/utils/pathReader.js';
|
||||
export * from './src/utils/request-tokenizer/supportedImageFormats.js';
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@qwen-code/qwen-code-core",
|
||||
"version": "0.0.11",
|
||||
"version": "0.0.12-nightly.0",
|
||||
"description": "Qwen Code Core",
|
||||
"repository": {
|
||||
"type": "git",
|
||||
|
||||
@@ -238,7 +238,6 @@ export interface ConfigParameters {
|
||||
skipNextSpeakerCheck?: boolean;
|
||||
extensionManagement?: boolean;
|
||||
enablePromptCompletion?: boolean;
|
||||
skipLoopDetection?: boolean;
|
||||
}
|
||||
|
||||
export class Config {
|
||||
@@ -329,7 +328,6 @@ export class Config {
|
||||
private readonly skipNextSpeakerCheck: boolean;
|
||||
private readonly extensionManagement: boolean;
|
||||
private readonly enablePromptCompletion: boolean = false;
|
||||
private readonly skipLoopDetection: boolean;
|
||||
private initialized: boolean = false;
|
||||
readonly storage: Storage;
|
||||
private readonly fileExclusions: FileExclusions;
|
||||
@@ -412,9 +410,6 @@ export class Config {
|
||||
this.chatCompression = params.chatCompression;
|
||||
this.interactive = params.interactive ?? false;
|
||||
this.trustedFolder = params.trustedFolder;
|
||||
this.shouldUseNodePtyShell = params.shouldUseNodePtyShell ?? false;
|
||||
this.skipNextSpeakerCheck = params.skipNextSpeakerCheck ?? false;
|
||||
this.skipLoopDetection = params.skipLoopDetection ?? false;
|
||||
|
||||
// Web search
|
||||
this.tavilyApiKey = params.tavilyApiKey;
|
||||
@@ -521,18 +516,6 @@ export class Config {
|
||||
if (this.contentGeneratorConfig) {
|
||||
this.contentGeneratorConfig.model = newModel;
|
||||
}
|
||||
|
||||
// Reinitialize chat with updated configuration while preserving history
|
||||
const geminiClient = this.getGeminiClient();
|
||||
if (geminiClient && geminiClient.isInitialized()) {
|
||||
// Use async operation but don't await to avoid blocking
|
||||
geminiClient.reinitialize().catch((error) => {
|
||||
console.error(
|
||||
'Failed to reinitialize chat with updated config:',
|
||||
error,
|
||||
);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
isInFallbackMode(): boolean {
|
||||
@@ -934,10 +917,6 @@ export class Config {
|
||||
return this.enablePromptCompletion;
|
||||
}
|
||||
|
||||
getSkipLoopDetection(): boolean {
|
||||
return this.skipLoopDetection;
|
||||
}
|
||||
|
||||
async getGitService(): Promise<GitService> {
|
||||
if (!this.gitService) {
|
||||
this.gitService = new GitService(this.targetDir, this.storage);
|
||||
|
||||
@@ -105,7 +105,7 @@ export class Storage {
|
||||
}
|
||||
|
||||
getExtensionsConfigPath(): string {
|
||||
return path.join(this.getExtensionsDir(), 'qwen-extension.json');
|
||||
return path.join(this.getExtensionsDir(), 'gemini-extension.json');
|
||||
}
|
||||
|
||||
getHistoryFilePath(): string {
|
||||
|
||||
File diff suppressed because it is too large
@@ -5,10 +5,9 @@
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import { OpenAIContentGenerator } from '../openaiContentGenerator/openaiContentGenerator.js';
|
||||
import { OpenAIContentGenerator } from '../openaiContentGenerator.js';
|
||||
import type { Config } from '../../config/config.js';
|
||||
import { AuthType } from '../contentGenerator.js';
|
||||
import type { OpenAICompatibleProvider } from '../openaiContentGenerator/provider/index.js';
|
||||
import OpenAI from 'openai';
|
||||
|
||||
// Mock OpenAI
|
||||
@@ -31,7 +30,6 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
let mockConfig: Config;
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
let mockOpenAIClient: any;
|
||||
let mockProvider: OpenAICompatibleProvider;
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset mocks
|
||||
@@ -44,7 +42,6 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
mockConfig = {
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({
|
||||
authType: 'openai',
|
||||
enableOpenAILogging: false,
|
||||
}),
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
} as unknown as Config;
|
||||
@@ -56,34 +53,17 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
create: vi.fn(),
|
||||
},
|
||||
},
|
||||
embeddings: {
|
||||
create: vi.fn(),
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(OpenAI).mockImplementation(() => mockOpenAIClient);
|
||||
|
||||
// Create mock provider
|
||||
mockProvider = {
|
||||
buildHeaders: vi.fn().mockReturnValue({
|
||||
'User-Agent': 'QwenCode/1.0.0 (test; test)',
|
||||
}),
|
||||
buildClient: vi.fn().mockReturnValue(mockOpenAIClient),
|
||||
buildRequest: vi.fn().mockImplementation((req) => req),
|
||||
};
|
||||
|
||||
// Create generator instance
|
||||
const contentGeneratorConfig = {
|
||||
model: 'gpt-4',
|
||||
apiKey: 'test-key',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
enableOpenAILogging: false,
|
||||
};
|
||||
generator = new OpenAIContentGenerator(
|
||||
contentGeneratorConfig,
|
||||
mockConfig,
|
||||
mockProvider,
|
||||
);
|
||||
generator = new OpenAIContentGenerator(contentGeneratorConfig, mockConfig);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
@@ -229,7 +209,7 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
await expect(
|
||||
generator.generateContentStream(request, 'test-prompt-id'),
|
||||
).rejects.toThrow(
|
||||
/Streaming request timeout after \d+s\. Try reducing input length or increasing timeout in config\./,
|
||||
/Streaming setup timeout after \d+s\. Try reducing input length or increasing timeout in config\./,
|
||||
);
|
||||
});
|
||||
|
||||
@@ -247,8 +227,12 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
} catch (error: unknown) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : String(error);
|
||||
expect(errorMessage).toContain('Streaming timeout troubleshooting:');
|
||||
expect(errorMessage).toContain('Check network connectivity');
|
||||
expect(errorMessage).toContain(
|
||||
'Streaming setup timeout troubleshooting:',
|
||||
);
|
||||
expect(errorMessage).toContain(
|
||||
'Check network connectivity and firewall settings',
|
||||
);
|
||||
expect(errorMessage).toContain('Consider using non-streaming mode');
|
||||
}
|
||||
});
|
||||
@@ -262,21 +246,23 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
authType: AuthType.USE_OPENAI,
|
||||
baseUrl: 'http://localhost:8080',
|
||||
};
|
||||
new OpenAIContentGenerator(
|
||||
contentGeneratorConfig,
|
||||
mockConfig,
|
||||
mockProvider,
|
||||
);
|
||||
new OpenAIContentGenerator(contentGeneratorConfig, mockConfig);
|
||||
|
||||
// Verify provider buildClient was called
|
||||
expect(mockProvider.buildClient).toHaveBeenCalled();
|
||||
// Verify OpenAI client was created with timeout config
|
||||
expect(OpenAI).toHaveBeenCalledWith({
|
||||
apiKey: 'test-key',
|
||||
baseURL: 'http://localhost:8080',
|
||||
timeout: 120000,
|
||||
maxRetries: 3,
|
||||
defaultHeaders: {
|
||||
'User-Agent': expect.stringMatching(/^QwenCode/),
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should use custom timeout from config', () => {
|
||||
const customConfig = {
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({
|
||||
enableOpenAILogging: false,
|
||||
}),
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({}),
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
} as unknown as Config;
|
||||
|
||||
@@ -288,31 +274,22 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
timeout: 300000,
|
||||
maxRetries: 5,
|
||||
};
|
||||
new OpenAIContentGenerator(contentGeneratorConfig, customConfig);
|
||||
|
||||
// Create a custom mock provider for this test
|
||||
const customMockProvider: OpenAICompatibleProvider = {
|
||||
buildHeaders: vi.fn().mockReturnValue({
|
||||
'User-Agent': 'QwenCode/1.0.0 (test; test)',
|
||||
}),
|
||||
buildClient: vi.fn().mockReturnValue(mockOpenAIClient),
|
||||
buildRequest: vi.fn().mockImplementation((req) => req),
|
||||
};
|
||||
|
||||
new OpenAIContentGenerator(
|
||||
contentGeneratorConfig,
|
||||
customConfig,
|
||||
customMockProvider,
|
||||
);
|
||||
|
||||
// Verify provider buildClient was called
|
||||
expect(customMockProvider.buildClient).toHaveBeenCalled();
|
||||
expect(OpenAI).toHaveBeenCalledWith({
|
||||
apiKey: 'test-key',
|
||||
baseURL: 'http://localhost:8080',
|
||||
timeout: 300000,
|
||||
maxRetries: 5,
|
||||
defaultHeaders: {
|
||||
'User-Agent': expect.stringMatching(/^QwenCode/),
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle missing timeout config gracefully', () => {
|
||||
const noTimeoutConfig = {
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({
|
||||
enableOpenAILogging: false,
|
||||
}),
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({}),
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
} as unknown as Config;
|
||||
|
||||
@@ -322,24 +299,17 @@ describe('OpenAIContentGenerator Timeout Handling', () => {
|
||||
authType: AuthType.USE_OPENAI,
|
||||
baseUrl: 'http://localhost:8080',
|
||||
};
|
||||
new OpenAIContentGenerator(contentGeneratorConfig, noTimeoutConfig);
|
||||
|
||||
// Create a custom mock provider for this test
|
||||
const noTimeoutMockProvider: OpenAICompatibleProvider = {
|
||||
buildHeaders: vi.fn().mockReturnValue({
|
||||
'User-Agent': 'QwenCode/1.0.0 (test; test)',
|
||||
}),
|
||||
buildClient: vi.fn().mockReturnValue(mockOpenAIClient),
|
||||
buildRequest: vi.fn().mockImplementation((req) => req),
|
||||
};
|
||||
|
||||
new OpenAIContentGenerator(
|
||||
contentGeneratorConfig,
|
||||
noTimeoutConfig,
|
||||
noTimeoutMockProvider,
|
||||
);
|
||||
|
||||
// Verify provider buildClient was called
|
||||
expect(noTimeoutMockProvider.buildClient).toHaveBeenCalled();
|
||||
expect(OpenAI).toHaveBeenCalledWith({
|
||||
apiKey: 'test-key',
|
||||
baseURL: 'http://localhost:8080',
|
||||
timeout: 120000, // default
|
||||
maxRetries: 3, // default
|
||||
defaultHeaders: {
|
||||
'User-Agent': expect.stringMatching(/^QwenCode/),
|
||||
},
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -226,9 +226,6 @@ describe('Gemini Client (client.ts)', () => {
|
||||
vertexai: false,
|
||||
authType: AuthType.USE_GEMINI,
|
||||
};
|
||||
const mockSubagentManager = {
|
||||
listSubagents: vi.fn().mockResolvedValue([]),
|
||||
};
|
||||
const mockConfigObject = {
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
@@ -263,8 +260,6 @@ describe('Gemini Client (client.ts)', () => {
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
getChatCompression: vi.fn().mockReturnValue(undefined),
|
||||
getSkipNextSpeakerCheck: vi.fn().mockReturnValue(false),
|
||||
getSubagentManager: vi.fn().mockReturnValue(mockSubagentManager),
|
||||
getSkipLoopDetection: vi.fn().mockReturnValue(false),
|
||||
};
|
||||
const MockedConfig = vi.mocked(Config, true);
|
||||
MockedConfig.mockImplementation(
|
||||
@@ -441,8 +436,7 @@ describe('Gemini Client (client.ts)', () => {
|
||||
);
|
||||
});
|
||||
|
||||
/* We now use model in contentGeneratorConfig in most cases. */
|
||||
it.skip('should allow overriding model and config', async () => {
|
||||
it('should allow overriding model and config', async () => {
|
||||
const contents: Content[] = [
|
||||
{ role: 'user', parts: [{ text: 'hello' }] },
|
||||
];
|
||||
@@ -2297,100 +2291,6 @@ ${JSON.stringify(
|
||||
// Assert
|
||||
expect(mockCheckNextSpeaker).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('does not run loop checks when skipLoopDetection is true', async () => {
|
||||
// Arrange
|
||||
// Ensure config returns true for skipLoopDetection
|
||||
vi.spyOn(client['config'], 'getSkipLoopDetection').mockReturnValue(true);
|
||||
|
||||
// Replace loop detector with spies
|
||||
const ldMock = {
|
||||
turnStarted: vi.fn().mockResolvedValue(false),
|
||||
addAndCheck: vi.fn().mockReturnValue(false),
|
||||
reset: vi.fn(),
|
||||
};
|
||||
// @ts-expect-error override private for testing
|
||||
client['loopDetector'] = ldMock;
|
||||
|
||||
const mockStream = (async function* () {
|
||||
yield { type: 'content', value: 'Hello' };
|
||||
yield { type: 'content', value: 'World' };
|
||||
})();
|
||||
mockTurnRunFn.mockReturnValue(mockStream);
|
||||
|
||||
const mockChat: Partial<GeminiChat> = {
|
||||
addHistory: vi.fn(),
|
||||
getHistory: vi.fn().mockReturnValue([]),
|
||||
};
|
||||
client['chat'] = mockChat as GeminiChat;
|
||||
|
||||
const mockGenerator: Partial<ContentGenerator> = {
|
||||
countTokens: vi.fn().mockResolvedValue({ totalTokens: 0 }),
|
||||
generateContent: mockGenerateContentFn,
|
||||
};
|
||||
client['contentGenerator'] = mockGenerator as ContentGenerator;
|
||||
|
||||
// Act
|
||||
const stream = client.sendMessageStream(
|
||||
[{ text: 'Hi' }],
|
||||
new AbortController().signal,
|
||||
'prompt-id-loop-skip',
|
||||
);
|
||||
for await (const _ of stream) {
|
||||
// consume
|
||||
}
|
||||
|
||||
// Assert: methods not called due to skip
|
||||
const ld = client['loopDetector'] as unknown as {
|
||||
turnStarted: ReturnType<typeof vi.fn>;
|
||||
addAndCheck: ReturnType<typeof vi.fn>;
|
||||
};
|
||||
expect(ld.turnStarted).not.toHaveBeenCalled();
|
||||
expect(ld.addAndCheck).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('runs loop checks when skipLoopDetection is false', async () => {
|
||||
// Arrange
|
||||
vi.spyOn(client['config'], 'getSkipLoopDetection').mockReturnValue(false);
|
||||
|
||||
const turnStarted = vi.fn().mockResolvedValue(false);
|
||||
const addAndCheck = vi.fn().mockReturnValue(false);
|
||||
const reset = vi.fn();
|
||||
// @ts-expect-error override private for testing
|
||||
client['loopDetector'] = { turnStarted, addAndCheck, reset };
|
||||
|
||||
const mockStream = (async function* () {
|
||||
yield { type: 'content', value: 'Hello' };
|
||||
yield { type: 'content', value: 'World' };
|
||||
})();
|
||||
mockTurnRunFn.mockReturnValue(mockStream);
|
||||
|
||||
const mockChat: Partial<GeminiChat> = {
|
||||
addHistory: vi.fn(),
|
||||
getHistory: vi.fn().mockReturnValue([]),
|
||||
};
|
||||
client['chat'] = mockChat as GeminiChat;
|
||||
|
||||
const mockGenerator: Partial<ContentGenerator> = {
|
||||
countTokens: vi.fn().mockResolvedValue({ totalTokens: 0 }),
|
||||
generateContent: mockGenerateContentFn,
|
||||
};
|
||||
client['contentGenerator'] = mockGenerator as ContentGenerator;
|
||||
|
||||
// Act
|
||||
const stream = client.sendMessageStream(
|
||||
[{ text: 'Hi' }],
|
||||
new AbortController().signal,
|
||||
'prompt-id-loop-run',
|
||||
);
|
||||
for await (const _ of stream) {
|
||||
// consume
|
||||
}
|
||||
|
||||
// Assert
|
||||
expect(turnStarted).toHaveBeenCalledTimes(1);
|
||||
expect(addAndCheck).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('generateContent', () => {
|
||||
@@ -2550,82 +2450,4 @@ ${JSON.stringify(
|
||||
expect(mockChat.setHistory).toHaveBeenCalledWith(historyWithThoughts);
|
||||
});
|
||||
});
|
||||
|
||||
describe('initialize', () => {
|
||||
it('should accept extraHistory parameter and pass it to startChat', async () => {
|
||||
const mockStartChat = vi.fn().mockResolvedValue({});
|
||||
client['startChat'] = mockStartChat;
|
||||
|
||||
const extraHistory = [
|
||||
{ role: 'user', parts: [{ text: 'Previous message' }] },
|
||||
{ role: 'model', parts: [{ text: 'Previous response' }] },
|
||||
];
|
||||
|
||||
const contentGeneratorConfig = {
|
||||
model: 'test-model',
|
||||
apiKey: 'test-key',
|
||||
vertexai: false,
|
||||
authType: AuthType.USE_GEMINI,
|
||||
};
|
||||
|
||||
await client.initialize(contentGeneratorConfig, extraHistory);
|
||||
|
||||
expect(mockStartChat).toHaveBeenCalledWith(extraHistory, 'test-model');
|
||||
});
|
||||
|
||||
it('should use empty array when no extraHistory is provided', async () => {
|
||||
const mockStartChat = vi.fn().mockResolvedValue({});
|
||||
client['startChat'] = mockStartChat;
|
||||
|
||||
const contentGeneratorConfig = {
|
||||
model: 'test-model',
|
||||
apiKey: 'test-key',
|
||||
vertexai: false,
|
||||
authType: AuthType.USE_GEMINI,
|
||||
};
|
||||
|
||||
await client.initialize(contentGeneratorConfig);
|
||||
|
||||
expect(mockStartChat).toHaveBeenCalledWith([], 'test-model');
|
||||
});
|
||||
});
|
||||
|
||||
describe('reinitialize', () => {
|
||||
it('should reinitialize with preserved user history', async () => {
|
||||
// Mock the initialize method
|
||||
const mockInitialize = vi.fn().mockResolvedValue(undefined);
|
||||
client['initialize'] = mockInitialize;
|
||||
|
||||
// Set up initial history with environment context + user messages
|
||||
const mockHistory = [
|
||||
{ role: 'user', parts: [{ text: 'Environment context' }] },
|
||||
{ role: 'model', parts: [{ text: 'Got it. Thanks for the context!' }] },
|
||||
{ role: 'user', parts: [{ text: 'User message 1' }] },
|
||||
{ role: 'model', parts: [{ text: 'Model response 1' }] },
|
||||
];
|
||||
|
||||
const mockChat = {
|
||||
getHistory: vi.fn().mockReturnValue(mockHistory),
|
||||
};
|
||||
client['chat'] = mockChat as unknown as GeminiChat;
|
||||
client['getHistory'] = vi.fn().mockReturnValue(mockHistory);
|
||||
|
||||
await client.reinitialize();
|
||||
|
||||
// Should call initialize with preserved user history (excluding first 2 env messages)
|
||||
expect(mockInitialize).toHaveBeenCalledWith(
|
||||
expect.any(Object), // contentGeneratorConfig
|
||||
[
|
||||
{ role: 'user', parts: [{ text: 'User message 1' }] },
|
||||
{ role: 'model', parts: [{ text: 'Model response 1' }] },
|
||||
],
|
||||
);
|
||||
});
|
||||
|
||||
it('should not throw error when chat is not initialized', async () => {
|
||||
client['chat'] = undefined;
|
||||
|
||||
await expect(client.reinitialize()).resolves.not.toThrow();
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -29,7 +29,6 @@ import {
|
||||
makeChatCompressionEvent,
|
||||
NextSpeakerCheckEvent,
|
||||
} from '../telemetry/types.js';
|
||||
import { TaskTool } from '../tools/task.js';
|
||||
import {
|
||||
getDirectoryContextString,
|
||||
getEnvironmentContext,
|
||||
@@ -138,24 +137,13 @@ export class GeminiClient {
|
||||
this.lastPromptId = this.config.getSessionId();
|
||||
}
|
||||
|
||||
async initialize(
|
||||
contentGeneratorConfig: ContentGeneratorConfig,
|
||||
extraHistory?: Content[],
|
||||
) {
|
||||
async initialize(contentGeneratorConfig: ContentGeneratorConfig) {
|
||||
this.contentGenerator = await createContentGenerator(
|
||||
contentGeneratorConfig,
|
||||
this.config,
|
||||
this.config.getSessionId(),
|
||||
);
|
||||
/**
|
||||
* Always take the model from contentGeneratorConfig to initialize,
|
||||
* despite the `this.config.contentGeneratorConfig` is not updated yet because in
|
||||
* `Config` it will not be updated until the initialization is successful.
|
||||
*/
|
||||
this.chat = await this.startChat(
|
||||
extraHistory || [],
|
||||
contentGeneratorConfig.model,
|
||||
);
|
||||
this.chat = await this.startChat();
|
||||
}
|
||||
|
||||
getContentGenerator(): ContentGenerator {
|
||||
@@ -228,28 +216,6 @@ export class GeminiClient {
|
||||
this.chat = await this.startChat();
|
||||
}
|
||||
|
||||
/**
|
||||
* Reinitializes the chat with the current contentGeneratorConfig while preserving chat history.
|
||||
* This creates a new chat object using the existing history and updated configuration.
|
||||
* Should be called when configuration changes (model, auth, etc.) to ensure consistency.
|
||||
*/
|
||||
async reinitialize(): Promise<void> {
|
||||
if (!this.chat) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Preserve the current chat history (excluding environment context)
|
||||
const currentHistory = this.getHistory();
|
||||
// Remove the initial environment context (first 2 messages: user env + model acknowledgment)
|
||||
const userHistory = currentHistory.slice(2);
|
||||
|
||||
// Get current content generator config and reinitialize with preserved history
|
||||
const contentGeneratorConfig = this.config.getContentGeneratorConfig();
|
||||
if (contentGeneratorConfig) {
|
||||
await this.initialize(contentGeneratorConfig, userHistory);
|
||||
}
|
||||
}
|
||||
|
||||
async addDirectoryContext(): Promise<void> {
|
||||
if (!this.chat) {
|
||||
return;
|
||||
@@ -261,10 +227,7 @@ export class GeminiClient {
|
||||
});
|
||||
}
|
||||
|
||||
async startChat(
|
||||
extraHistory?: Content[],
|
||||
model?: string,
|
||||
): Promise<GeminiChat> {
|
||||
async startChat(extraHistory?: Content[]): Promise<GeminiChat> {
|
||||
this.forceFullIdeContext = true;
|
||||
this.hasFailedCompressionAttempt = false;
|
||||
const envParts = await getEnvironmentContext(this.config);
|
||||
@@ -284,13 +247,9 @@ export class GeminiClient {
|
||||
];
|
||||
try {
|
||||
const userMemory = this.config.getUserMemory();
|
||||
const systemInstruction = getCoreSystemPrompt(
|
||||
userMemory,
|
||||
{},
|
||||
model || this.config.getModel(),
|
||||
);
|
||||
const systemInstruction = getCoreSystemPrompt(userMemory);
|
||||
const generateContentConfigWithThinking = isThinkingSupported(
|
||||
model || this.config.getModel(),
|
||||
this.config.getModel(),
|
||||
)
|
||||
? {
|
||||
...this.generateContentConfig,
|
||||
@@ -496,8 +455,7 @@ export class GeminiClient {
|
||||
turns: number = MAX_TURNS,
|
||||
originalModel?: string,
|
||||
): AsyncGenerator<ServerGeminiStreamEvent, Turn> {
|
||||
const isNewPrompt = this.lastPromptId !== prompt_id;
|
||||
if (isNewPrompt) {
|
||||
if (this.lastPromptId !== prompt_id) {
|
||||
this.loopDetector.reset(prompt_id);
|
||||
this.lastPromptId = prompt_id;
|
||||
}
|
||||
@@ -530,11 +488,7 @@ export class GeminiClient {
|
||||
// Get all the content that would be sent in an API call
|
||||
const currentHistory = this.getChat().getHistory(true);
|
||||
const userMemory = this.config.getUserMemory();
|
||||
const systemPrompt = getCoreSystemPrompt(
|
||||
userMemory,
|
||||
{},
|
||||
this.config.getModel(),
|
||||
);
|
||||
const systemPrompt = getCoreSystemPrompt(userMemory);
|
||||
const environment = await getEnvironmentContext(this.config);
|
||||
|
||||
// Create a mock request content to count total tokens
|
||||
@@ -598,41 +552,19 @@ export class GeminiClient {
|
||||
this.forceFullIdeContext = false;
|
||||
}
|
||||
|
||||
if (isNewPrompt) {
|
||||
const taskTool = this.config.getToolRegistry().getTool(TaskTool.Name);
|
||||
const subagents = (
|
||||
await this.config.getSubagentManager().listSubagents()
|
||||
).filter((subagent) => subagent.level !== 'builtin');
|
||||
|
||||
if (taskTool && subagents.length > 0) {
|
||||
this.getChat().addHistory({
|
||||
role: 'user',
|
||||
parts: [
|
||||
{
|
||||
text: `<system-reminder>You have powerful specialized agents at your disposal, available agent types are: ${subagents.map((subagent) => subagent.name).join(', ')}. PROACTIVELY use the ${TaskTool.Name} tool to delegate user's task to appropriate agent when user's task matches agent capabilities. Ignore this message if user's task is not relevant to any agent. This message is for internal use only. Do not mention this to user in your response.</system-reminder>`,
|
||||
},
|
||||
],
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const turn = new Turn(this.getChat(), prompt_id);
|
||||
|
||||
if (!this.config.getSkipLoopDetection()) {
|
||||
const loopDetected = await this.loopDetector.turnStarted(signal);
|
||||
if (loopDetected) {
|
||||
yield { type: GeminiEventType.LoopDetected };
|
||||
return turn;
|
||||
}
|
||||
const loopDetected = await this.loopDetector.turnStarted(signal);
|
||||
if (loopDetected) {
|
||||
yield { type: GeminiEventType.LoopDetected };
|
||||
return turn;
|
||||
}
|
||||
|
||||
const resultStream = turn.run(request, signal);
|
||||
for await (const event of resultStream) {
|
||||
if (!this.config.getSkipLoopDetection()) {
|
||||
if (this.loopDetector.addAndCheck(event)) {
|
||||
yield { type: GeminiEventType.LoopDetected };
|
||||
return turn;
|
||||
}
|
||||
if (this.loopDetector.addAndCheck(event)) {
|
||||
yield { type: GeminiEventType.LoopDetected };
|
||||
return turn;
|
||||
}
|
||||
yield event;
|
||||
if (event.type === GeminiEventType.Error) {
|
||||
@@ -688,18 +620,14 @@ export class GeminiClient {
|
||||
model?: string,
|
||||
config: GenerateContentConfig = {},
|
||||
): Promise<Record<string, unknown>> {
|
||||
/**
|
||||
* TODO: ensure `model` consistency among GeminiClient, GeminiChat, and ContentGenerator
|
||||
* `model` passed to generateContent is not respected as we always use contentGenerator
|
||||
* We should ignore model for now because some calls use `DEFAULT_GEMINI_FLASH_MODEL`
|
||||
* which is not available as `qwen3-coder-flash`
|
||||
*/
|
||||
const modelToUse = this.config.getModel() || DEFAULT_GEMINI_FLASH_MODEL;
|
||||
// Use current model from config instead of hardcoded Flash model
|
||||
const modelToUse =
|
||||
model || this.config.getModel() || DEFAULT_GEMINI_FLASH_MODEL;
|
||||
try {
|
||||
const userMemory = this.config.getUserMemory();
|
||||
const finalSystemInstruction = config.systemInstruction
|
||||
? getCustomSystemPrompt(config.systemInstruction, userMemory)
|
||||
: getCoreSystemPrompt(userMemory, {}, modelToUse);
|
||||
: getCoreSystemPrompt(userMemory);
|
||||
|
||||
const requestConfig = {
|
||||
abortSignal,
|
||||
@@ -790,7 +718,7 @@ export class GeminiClient {
|
||||
const userMemory = this.config.getUserMemory();
|
||||
const finalSystemInstruction = generationConfig.systemInstruction
|
||||
? getCustomSystemPrompt(generationConfig.systemInstruction, userMemory)
|
||||
: getCoreSystemPrompt(userMemory, {}, this.config.getModel());
|
||||
: getCoreSystemPrompt(userMemory);
|
||||
|
||||
const requestConfig: GenerateContentConfig = {
|
||||
abortSignal,
|
||||
|
||||
@@ -500,7 +500,7 @@ export class GeminiChat {
|
||||
if (error instanceof Error && error.message) {
|
||||
if (isSchemaDepthError(error.message)) return false;
|
||||
if (error.message.includes('429')) return true;
|
||||
if (error.message.match(/^5\d{2}/)) return true;
|
||||
if (error.message.match(/5\d{2}/)) return true;
|
||||
}
|
||||
return false;
|
||||
},
|
||||
|
||||
3511
packages/core/src/core/openaiContentGenerator.test.ts
Normal file
3511
packages/core/src/core/openaiContentGenerator.test.ts
Normal file
File diff suppressed because it is too large
Load Diff
1711
packages/core/src/core/openaiContentGenerator.ts
Normal file
1711
packages/core/src/core/openaiContentGenerator.ts
Normal file
File diff suppressed because it is too large
Load Diff
@@ -5,37 +5,6 @@
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
|
||||
// Mock the request tokenizer module BEFORE importing the class that uses it
|
||||
const mockTokenizer = {
|
||||
calculateTokens: vi.fn().mockResolvedValue({
|
||||
totalTokens: 50,
|
||||
breakdown: {
|
||||
textTokens: 50,
|
||||
imageTokens: 0,
|
||||
audioTokens: 0,
|
||||
otherTokens: 0,
|
||||
},
|
||||
processingTime: 1,
|
||||
}),
|
||||
dispose: vi.fn(),
|
||||
};
|
||||
|
||||
vi.mock('../../../utils/request-tokenizer/index.js', () => ({
|
||||
getDefaultTokenizer: vi.fn(() => mockTokenizer),
|
||||
DefaultRequestTokenizer: vi.fn(() => mockTokenizer),
|
||||
disposeDefaultTokenizer: vi.fn(),
|
||||
}));
|
||||
|
||||
// Mock tiktoken as well for completeness
|
||||
vi.mock('tiktoken', () => ({
|
||||
get_encoding: vi.fn(() => ({
|
||||
encode: vi.fn(() => new Array(50)), // Mock 50 tokens
|
||||
free: vi.fn(),
|
||||
})),
|
||||
}));
|
||||
|
||||
// Now import the modules that depend on the mocked modules
|
||||
import { OpenAIContentGenerator } from './openaiContentGenerator.js';
|
||||
import type { Config } from '../../config/config.js';
|
||||
import { AuthType } from '../contentGenerator.js';
|
||||
@@ -46,6 +15,14 @@ import type {
|
||||
import type { OpenAICompatibleProvider } from './provider/index.js';
|
||||
import type OpenAI from 'openai';
|
||||
|
||||
// Mock tiktoken
|
||||
vi.mock('tiktoken', () => ({
|
||||
get_encoding: vi.fn().mockReturnValue({
|
||||
encode: vi.fn().mockReturnValue(new Array(50)), // Mock 50 tokens
|
||||
free: vi.fn(),
|
||||
}),
|
||||
}));
|
||||
|
||||
describe('OpenAIContentGenerator (Refactored)', () => {
|
||||
let generator: OpenAIContentGenerator;
|
||||
let mockConfig: Config;
|
||||
|
||||
@@ -13,7 +13,6 @@ import type { PipelineConfig } from './pipeline.js';
|
||||
import { ContentGenerationPipeline } from './pipeline.js';
|
||||
import { DefaultTelemetryService } from './telemetryService.js';
|
||||
import { EnhancedErrorHandler } from './errorHandler.js';
|
||||
import { getDefaultTokenizer } from '../../utils/request-tokenizer/index.js';
|
||||
import type { ContentGeneratorConfig } from '../contentGenerator.js';
|
||||
|
||||
export class OpenAIContentGenerator implements ContentGenerator {
|
||||
@@ -72,30 +71,27 @@ export class OpenAIContentGenerator implements ContentGenerator {
|
||||
async countTokens(
|
||||
request: CountTokensParameters,
|
||||
): Promise<CountTokensResponse> {
|
||||
try {
|
||||
// Use the new high-performance request tokenizer
|
||||
const tokenizer = getDefaultTokenizer();
|
||||
const result = await tokenizer.calculateTokens(request, {
|
||||
textEncoding: 'cl100k_base', // Use GPT-4 encoding for consistency
|
||||
});
|
||||
// Use tiktoken for accurate token counting
|
||||
const content = JSON.stringify(request.contents);
|
||||
let totalTokens = 0;
|
||||
|
||||
return {
|
||||
totalTokens: result.totalTokens,
|
||||
};
|
||||
try {
|
||||
const { get_encoding } = await import('tiktoken');
|
||||
const encoding = get_encoding('cl100k_base'); // GPT-4 encoding, but estimate for qwen
|
||||
totalTokens = encoding.encode(content).length;
|
||||
encoding.free();
|
||||
} catch (error) {
|
||||
console.warn(
|
||||
'Failed to calculate tokens with new tokenizer, falling back to simple method:',
|
||||
'Failed to load tiktoken, falling back to character approximation:',
|
||||
error,
|
||||
);
|
||||
|
||||
// Fallback to original simple method
|
||||
const content = JSON.stringify(request.contents);
|
||||
const totalTokens = Math.ceil(content.length / 4); // Rough estimate: 1 token ≈ 4 characters
|
||||
|
||||
return {
|
||||
totalTokens,
|
||||
};
|
||||
// Fallback: rough approximation using character count
|
||||
totalTokens = Math.ceil(content.length / 4); // Rough estimate: 1 token ≈ 4 characters
|
||||
}
|
||||
|
||||
return {
|
||||
totalTokens,
|
||||
};
|
||||
}
|
||||
|
||||
async embedContent(
|
||||
|
||||
@@ -1105,164 +1105,5 @@ describe('ContentGenerationPipeline', () => {
|
||||
expect.any(Array),
|
||||
);
|
||||
});
|
||||
|
||||
it('should collect all OpenAI chunks for logging even when Gemini responses are filtered', async () => {
|
||||
// Create chunks that would produce empty Gemini responses (partial tool calls)
|
||||
const partialToolCallChunk1: OpenAI.Chat.ChatCompletionChunk = {
|
||||
id: 'chunk-1',
|
||||
object: 'chat.completion.chunk',
|
||||
created: Date.now(),
|
||||
model: 'test-model',
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
delta: {
|
||||
tool_calls: [
|
||||
{
|
||||
index: 0,
|
||||
id: 'call_123',
|
||||
type: 'function',
|
||||
function: { name: 'test_function', arguments: '{"par' },
|
||||
},
|
||||
],
|
||||
},
|
||||
finish_reason: null,
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const partialToolCallChunk2: OpenAI.Chat.ChatCompletionChunk = {
|
||||
id: 'chunk-2',
|
||||
object: 'chat.completion.chunk',
|
||||
created: Date.now(),
|
||||
model: 'test-model',
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
delta: {
|
||||
tool_calls: [
|
||||
{
|
||||
index: 0,
|
||||
function: { arguments: 'am": "value"}' },
|
||||
},
|
||||
],
|
||||
},
|
||||
finish_reason: null,
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const finishChunk: OpenAI.Chat.ChatCompletionChunk = {
|
||||
id: 'chunk-3',
|
||||
object: 'chat.completion.chunk',
|
||||
created: Date.now(),
|
||||
model: 'test-model',
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
delta: {},
|
||||
finish_reason: 'tool_calls',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
// Mock empty Gemini responses for partial chunks (they get filtered)
|
||||
const emptyGeminiResponse1 = new GenerateContentResponse();
|
||||
emptyGeminiResponse1.candidates = [
|
||||
{
|
||||
content: { parts: [], role: 'model' },
|
||||
index: 0,
|
||||
safetyRatings: [],
|
||||
},
|
||||
];
|
||||
|
||||
const emptyGeminiResponse2 = new GenerateContentResponse();
|
||||
emptyGeminiResponse2.candidates = [
|
||||
{
|
||||
content: { parts: [], role: 'model' },
|
||||
index: 0,
|
||||
safetyRatings: [],
|
||||
},
|
||||
];
|
||||
|
||||
// Mock final Gemini response with tool call
|
||||
const finalGeminiResponse = new GenerateContentResponse();
|
||||
finalGeminiResponse.candidates = [
|
||||
{
|
||||
content: {
|
||||
parts: [
|
||||
{
|
||||
functionCall: {
|
||||
id: 'call_123',
|
||||
name: 'test_function',
|
||||
args: { param: 'value' },
|
||||
},
|
||||
},
|
||||
],
|
||||
role: 'model',
|
||||
},
|
||||
finishReason: FinishReason.STOP,
|
||||
index: 0,
|
||||
safetyRatings: [],
|
||||
},
|
||||
];
|
||||
|
||||
// Setup converter mocks
|
||||
(mockConverter.convertGeminiRequestToOpenAI as Mock).mockReturnValue([
|
||||
{ role: 'user', content: 'test' },
|
||||
]);
|
||||
(mockConverter.convertOpenAIChunkToGemini as Mock)
|
||||
.mockReturnValueOnce(emptyGeminiResponse1) // First partial chunk -> empty response
|
||||
.mockReturnValueOnce(emptyGeminiResponse2) // Second partial chunk -> empty response
|
||||
.mockReturnValueOnce(finalGeminiResponse); // Finish chunk -> complete response
|
||||
|
||||
// Mock stream
|
||||
const mockStream = {
|
||||
async *[Symbol.asyncIterator]() {
|
||||
yield partialToolCallChunk1;
|
||||
yield partialToolCallChunk2;
|
||||
yield finishChunk;
|
||||
},
|
||||
};
|
||||
|
||||
(mockClient.chat.completions.create as Mock).mockResolvedValue(
|
||||
mockStream,
|
||||
);
|
||||
|
||||
const request: GenerateContentParameters = {
|
||||
model: 'test-model',
|
||||
contents: [{ role: 'user', parts: [{ text: 'test' }] }],
|
||||
};
|
||||
|
||||
// Collect responses
|
||||
const responses: GenerateContentResponse[] = [];
|
||||
const resultGenerator = await pipeline.executeStream(
|
||||
request,
|
||||
'test-prompt-id',
|
||||
);
|
||||
for await (const response of resultGenerator) {
|
||||
responses.push(response);
|
||||
}
|
||||
|
||||
// Should only yield the final response (empty ones are filtered)
|
||||
expect(responses).toHaveLength(1);
|
||||
expect(responses[0]).toBe(finalGeminiResponse);
|
||||
|
||||
// Verify telemetry was called with ALL OpenAI chunks, including the filtered ones
|
||||
expect(mockTelemetryService.logStreamingSuccess).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
model: 'test-model',
|
||||
duration: expect.any(Number),
|
||||
userPromptId: 'test-prompt-id',
|
||||
authType: 'openai',
|
||||
}),
|
||||
[finalGeminiResponse], // Only the non-empty Gemini response
|
||||
expect.objectContaining({
|
||||
model: 'test-model',
|
||||
messages: [{ role: 'user', content: 'test' }],
|
||||
}),
|
||||
[partialToolCallChunk1, partialToolCallChunk2, finishChunk], // ALL OpenAI chunks
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -10,11 +10,14 @@ import {
|
||||
GenerateContentResponse,
|
||||
} from '@google/genai';
|
||||
import type { Config } from '../../config/config.js';
|
||||
import type { ContentGeneratorConfig } from '../contentGenerator.js';
|
||||
import type { OpenAICompatibleProvider } from './provider/index.js';
|
||||
import { type ContentGeneratorConfig } from '../contentGenerator.js';
|
||||
import { type OpenAICompatibleProvider } from './provider/index.js';
|
||||
import { OpenAIContentConverter } from './converter.js';
|
||||
import type { TelemetryService, RequestContext } from './telemetryService.js';
|
||||
import type { ErrorHandler } from './errorHandler.js';
|
||||
import {
|
||||
type TelemetryService,
|
||||
type RequestContext,
|
||||
} from './telemetryService.js';
|
||||
import { type ErrorHandler } from './errorHandler.js';
|
||||
|
||||
export interface PipelineConfig {
|
||||
cliConfig: Config;
|
||||
@@ -98,7 +101,7 @@ export class ContentGenerationPipeline {
|
||||
* 2. Filter empty responses
|
||||
* 3. Handle chunk merging for providers that send finishReason and usageMetadata separately
|
||||
* 4. Collect both formats for logging
|
||||
* 5. Handle success/error logging
|
||||
* 5. Handle success/error logging with original OpenAI format
|
||||
*/
|
||||
private async *processStreamWithLogging(
|
||||
stream: AsyncIterable<OpenAI.Chat.ChatCompletionChunk>,
|
||||
@@ -118,9 +121,6 @@ export class ContentGenerationPipeline {
|
||||
try {
|
||||
// Stage 2a: Convert and yield each chunk while preserving original
|
||||
for await (const chunk of stream) {
|
||||
// Always collect OpenAI chunks for logging, regardless of Gemini conversion result
|
||||
collectedOpenAIChunks.push(chunk);
|
||||
|
||||
const response = this.converter.convertOpenAIChunkToGemini(chunk);
|
||||
|
||||
// Stage 2b: Filter empty responses to avoid downstream issues
|
||||
@@ -135,7 +135,9 @@ export class ContentGenerationPipeline {
|
||||
// Stage 2c: Handle chunk merging for providers that send finishReason and usageMetadata separately
|
||||
const shouldYield = this.handleChunkMerging(
|
||||
response,
|
||||
chunk,
|
||||
collectedGeminiResponses,
|
||||
collectedOpenAIChunks,
|
||||
(mergedResponse) => {
|
||||
pendingFinishResponse = mergedResponse;
|
||||
},
|
||||
@@ -167,11 +169,19 @@ export class ContentGenerationPipeline {
|
||||
collectedOpenAIChunks,
|
||||
);
|
||||
} catch (error) {
|
||||
// Stage 2e: Stream failed - handle error and logging
|
||||
context.duration = Date.now() - context.startTime;
|
||||
|
||||
// Clear streaming tool calls on error to prevent data pollution
|
||||
this.converter.resetStreamingToolCalls();
|
||||
|
||||
// Use shared error handling logic
|
||||
await this.handleError(error, context, request);
|
||||
await this.config.telemetryService.logError(
|
||||
context,
|
||||
error,
|
||||
openaiRequest,
|
||||
);
|
||||
|
||||
this.config.errorHandler.handle(error, context, request);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -183,13 +193,17 @@ export class ContentGenerationPipeline {
|
||||
* finishReason and the most up-to-date usage information from any provider pattern.
|
||||
*
|
||||
* @param response Current Gemini response
|
||||
* @param chunk Current OpenAI chunk
|
||||
* @param collectedGeminiResponses Array to collect responses for logging
|
||||
* @param collectedOpenAIChunks Array to collect chunks for logging
|
||||
* @param setPendingFinish Callback to set pending finish response
|
||||
* @returns true if the response should be yielded, false if it should be held for merging
|
||||
*/
|
||||
private handleChunkMerging(
|
||||
response: GenerateContentResponse,
|
||||
chunk: OpenAI.Chat.ChatCompletionChunk,
|
||||
collectedGeminiResponses: GenerateContentResponse[],
|
||||
collectedOpenAIChunks: OpenAI.Chat.ChatCompletionChunk[],
|
||||
setPendingFinish: (response: GenerateContentResponse) => void,
|
||||
): boolean {
|
||||
const isFinishChunk = response.candidates?.[0]?.finishReason;
|
||||
@@ -203,6 +217,7 @@ export class ContentGenerationPipeline {
|
||||
if (isFinishChunk) {
|
||||
// This is a finish reason chunk
|
||||
collectedGeminiResponses.push(response);
|
||||
collectedOpenAIChunks.push(chunk);
|
||||
setPendingFinish(response);
|
||||
return false; // Don't yield yet, wait for potential subsequent chunks to merge
|
||||
} else if (hasPendingFinish) {
|
||||
@@ -224,6 +239,7 @@ export class ContentGenerationPipeline {
|
||||
// Update the collected responses with the merged response
|
||||
collectedGeminiResponses[collectedGeminiResponses.length - 1] =
|
||||
mergedResponse;
|
||||
collectedOpenAIChunks.push(chunk);
|
||||
|
||||
setPendingFinish(mergedResponse);
|
||||
return true; // Yield the merged response
|
||||
@@ -231,6 +247,7 @@ export class ContentGenerationPipeline {
|
||||
|
||||
// Normal chunk - collect and yield
|
||||
collectedGeminiResponses.push(response);
|
||||
collectedOpenAIChunks.push(chunk);
|
||||
return true;
|
||||
}
|
||||
|
||||
@@ -348,59 +365,25 @@ export class ContentGenerationPipeline {
|
||||
context.duration = Date.now() - context.startTime;
|
||||
return result;
|
||||
} catch (error) {
|
||||
// Use shared error handling logic
|
||||
return await this.handleError(
|
||||
error,
|
||||
context,
|
||||
context.duration = Date.now() - context.startTime;
|
||||
|
||||
// Log error
|
||||
const openaiRequest = await this.buildRequest(
|
||||
request,
|
||||
userPromptId,
|
||||
isStreaming,
|
||||
);
|
||||
await this.config.telemetryService.logError(
|
||||
context,
|
||||
error,
|
||||
openaiRequest,
|
||||
);
|
||||
|
||||
// Handle and throw enhanced error
|
||||
this.config.errorHandler.handle(error, context, request);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Shared error handling logic for both executeWithErrorHandling and processStreamWithLogging
|
||||
* This centralizes the common error processing steps to avoid duplication
|
||||
*/
|
||||
private async handleError(
|
||||
error: unknown,
|
||||
context: RequestContext,
|
||||
request: GenerateContentParameters,
|
||||
userPromptId?: string,
|
||||
isStreaming?: boolean,
|
||||
): Promise<never> {
|
||||
context.duration = Date.now() - context.startTime;
|
||||
|
||||
// Build request for logging (may fail, but we still want to log the error)
|
||||
let openaiRequest: OpenAI.Chat.ChatCompletionCreateParams;
|
||||
try {
|
||||
if (userPromptId !== undefined && isStreaming !== undefined) {
|
||||
openaiRequest = await this.buildRequest(
|
||||
request,
|
||||
userPromptId,
|
||||
isStreaming,
|
||||
);
|
||||
} else {
|
||||
// For processStreamWithLogging, we don't have userPromptId/isStreaming,
|
||||
// so create a minimal request
|
||||
openaiRequest = {
|
||||
model: this.contentGeneratorConfig.model,
|
||||
messages: [],
|
||||
};
|
||||
}
|
||||
} catch (_buildError) {
|
||||
// If we can't build the request, create a minimal one for logging
|
||||
openaiRequest = {
|
||||
model: this.contentGeneratorConfig.model,
|
||||
messages: [],
|
||||
};
|
||||
}
|
||||
|
||||
await this.config.telemetryService.logError(context, error, openaiRequest);
|
||||
this.config.errorHandler.handle(error, context, request);
|
||||
}
|
||||
|
||||
/**
|
||||
* Create request context with common properties
|
||||
*/
|
||||
|
||||
@@ -79,16 +79,6 @@ export class DashScopeOpenAICompatibleProvider
|
||||
messages = this.addDashScopeCacheControl(messages, cacheTarget);
|
||||
}
|
||||
|
||||
if (request.model.startsWith('qwen-vl')) {
|
||||
return {
|
||||
...request,
|
||||
messages,
|
||||
...(this.buildMetadata(userPromptId) || {}),
|
||||
/* @ts-expect-error dashscope exclusive */
|
||||
vl_high_resolution_images: true,
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
...request, // Preserve all original parameters including sampling params
|
||||
messages,
|
||||
|
||||
@@ -364,120 +364,6 @@ describe('URL matching with trailing slash compatibility', () => {
|
||||
});
|
||||
});
|
||||
|
||||
describe('Model-specific tool call formats', () => {
|
||||
beforeEach(() => {
|
||||
vi.resetAllMocks();
|
||||
vi.stubEnv('SANDBOX', undefined);
|
||||
});
|
||||
|
||||
it('should use XML format for qwen3-coder model', () => {
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const prompt = getCoreSystemPrompt(undefined, undefined, 'qwen3-coder-7b');
|
||||
|
||||
// Should contain XML-style tool calls
|
||||
expect(prompt).toContain('<tool_call>');
|
||||
expect(prompt).toContain('<function=run_shell_command>');
|
||||
expect(prompt).toContain('<parameter=command>');
|
||||
expect(prompt).toContain('</function>');
|
||||
expect(prompt).toContain('</tool_call>');
|
||||
|
||||
// Should NOT contain bracket-style tool calls
|
||||
expect(prompt).not.toContain('[tool_call: run_shell_command for');
|
||||
|
||||
// Should NOT contain JSON-style tool calls
|
||||
expect(prompt).not.toContain('{"name": "run_shell_command"');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('should use JSON format for qwen-vl model', () => {
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const prompt = getCoreSystemPrompt(undefined, undefined, 'qwen-vl-max');
|
||||
|
||||
// Should contain JSON-style tool calls
|
||||
expect(prompt).toContain('<tool_call>');
|
||||
expect(prompt).toContain('{"name": "run_shell_command"');
|
||||
expect(prompt).toContain('"arguments": {"command": "node server.js &"}');
|
||||
expect(prompt).toContain('</tool_call>');
|
||||
|
||||
// Should NOT contain bracket-style tool calls
|
||||
expect(prompt).not.toContain('[tool_call: run_shell_command for');
|
||||
|
||||
// Should NOT contain XML-style tool calls with parameters
|
||||
expect(prompt).not.toContain('<function=run_shell_command>');
|
||||
expect(prompt).not.toContain('<parameter=command>');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('should use bracket format for generic models', () => {
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const prompt = getCoreSystemPrompt(undefined, undefined, 'gpt-4');
|
||||
|
||||
// Should contain bracket-style tool calls
|
||||
expect(prompt).toContain('[tool_call: run_shell_command for');
|
||||
expect(prompt).toContain('because it must run in the background]');
|
||||
|
||||
// Should NOT contain XML-style tool calls
|
||||
expect(prompt).not.toContain('<function=run_shell_command>');
|
||||
expect(prompt).not.toContain('<parameter=command>');
|
||||
|
||||
// Should NOT contain JSON-style tool calls
|
||||
expect(prompt).not.toContain('{"name": "run_shell_command"');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('should use bracket format when no model is specified', () => {
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const prompt = getCoreSystemPrompt();
|
||||
|
||||
// Should contain bracket-style tool calls (default behavior)
|
||||
expect(prompt).toContain('[tool_call: run_shell_command for');
|
||||
expect(prompt).toContain('because it must run in the background]');
|
||||
|
||||
// Should NOT contain XML or JSON formats
|
||||
expect(prompt).not.toContain('<function=run_shell_command>');
|
||||
expect(prompt).not.toContain('{"name": "run_shell_command"');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('should preserve model-specific formats with user memory', () => {
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const userMemory = 'User prefers concise responses.';
|
||||
const prompt = getCoreSystemPrompt(
|
||||
userMemory,
|
||||
undefined,
|
||||
'qwen3-coder-14b',
|
||||
);
|
||||
|
||||
// Should contain XML-style tool calls
|
||||
expect(prompt).toContain('<tool_call>');
|
||||
expect(prompt).toContain('<function=run_shell_command>');
|
||||
|
||||
// Should contain user memory with separator
|
||||
expect(prompt).toContain('---');
|
||||
expect(prompt).toContain('User prefers concise responses.');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('should preserve model-specific formats with sandbox environment', () => {
|
||||
vi.stubEnv('SANDBOX', 'true');
|
||||
vi.mocked(isGitRepository).mockReturnValue(false);
|
||||
const prompt = getCoreSystemPrompt(undefined, undefined, 'qwen-vl-plus');
|
||||
|
||||
// Should contain JSON-style tool calls
|
||||
expect(prompt).toContain('{"name": "run_shell_command"');
|
||||
|
||||
// Should contain sandbox instructions
|
||||
expect(prompt).toContain('# Sandbox');
|
||||
|
||||
expect(prompt).toMatchSnapshot();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getCustomSystemPrompt', () => {
|
||||
it('should handle string custom instruction without user memory', () => {
|
||||
const customInstruction =
|
||||
|
||||
@@ -7,10 +7,18 @@
|
||||
import path from 'node:path';
|
||||
import fs from 'node:fs';
|
||||
import os from 'node:os';
|
||||
import { ToolNames } from '../tools/tool-names.js';
|
||||
import { EditTool } from '../tools/edit.js';
|
||||
import { GlobTool } from '../tools/glob.js';
|
||||
import { GrepTool } from '../tools/grep.js';
|
||||
import { ReadFileTool } from '../tools/read-file.js';
|
||||
import { ReadManyFilesTool } from '../tools/read-many-files.js';
|
||||
import { ShellTool } from '../tools/shell.js';
|
||||
import { WriteFileTool } from '../tools/write-file.js';
|
||||
import process from 'node:process';
|
||||
import { isGitRepository } from '../utils/gitUtils.js';
|
||||
import { GEMINI_CONFIG_DIR } from '../tools/memoryTool.js';
|
||||
import { MemoryTool, GEMINI_CONFIG_DIR } from '../tools/memoryTool.js';
|
||||
import { TodoWriteTool } from '../tools/todoWrite.js';
|
||||
import { TaskTool } from '../tools/task.js';
|
||||
import type { GenerateContentConfig } from '@google/genai';
|
||||
|
||||
export interface ModelTemplateMapping {
|
||||
@@ -83,7 +91,6 @@ export function getCustomSystemPrompt(
|
||||
export function getCoreSystemPrompt(
|
||||
userMemory?: string,
|
||||
config?: SystemPromptConfig,
|
||||
model?: string,
|
||||
): string {
|
||||
// if GEMINI_SYSTEM_MD is set (and not 0|false), override system prompt from file
|
||||
// default path is .gemini/system.md but can be modified via custom path in GEMINI_SYSTEM_MD
|
||||
@@ -170,11 +177,11 @@ You are Qwen Code, an interactive CLI agent developed by Alibaba Group, speciali
|
||||
- **Proactiveness:** Fulfill the user's request thoroughly, including reasonable, directly implied follow-up actions.
|
||||
- **Confirm Ambiguity/Expansion:** Do not take significant actions beyond the clear scope of the request without confirming with the user. If asked *how* to do something, explain first, don't just do it.
|
||||
- **Explaining Changes:** After completing a code modification or file operation *do not* provide summaries unless asked.
|
||||
- **Path Construction:** Before using any file system tool (e.g., ${ToolNames.READ_FILE}' or '${ToolNames.WRITE_FILE}'), you must construct the full absolute path for the file_path argument. Always combine the absolute path of the project's root directory with the file's path relative to the root. For example, if the project root is /path/to/project/ and the file is foo/bar/baz.txt, the final path you must use is /path/to/project/foo/bar/baz.txt. If the user provides a relative path, you must resolve it against the root directory to create an absolute path.
|
||||
- **Path Construction:** Before using any file system tool (e.g., ${ReadFileTool.Name}' or '${WriteFileTool.Name}'), you must construct the full absolute path for the file_path argument. Always combine the absolute path of the project's root directory with the file's path relative to the root. For example, if the project root is /path/to/project/ and the file is foo/bar/baz.txt, the final path you must use is /path/to/project/foo/bar/baz.txt. If the user provides a relative path, you must resolve it against the root directory to create an absolute path.
|
||||
- **Do Not revert changes:** Do not revert changes to the codebase unless asked to do so by the user. Only revert changes made by you if they have resulted in an error or if the user has explicitly asked you to revert the changes.
|
||||
|
||||
# Task Management
|
||||
You have access to the ${ToolNames.TODO_WRITE} tool to help you manage and plan tasks. Use these tools VERY frequently to ensure that you are tracking your tasks and giving the user visibility into your progress.
|
||||
You have access to the ${TodoWriteTool.Name} tool to help you manage and plan tasks. Use these tools VERY frequently to ensure that you are tracking your tasks and giving the user visibility into your progress.
|
||||
These tools are also EXTREMELY helpful for planning tasks, and for breaking down larger complex tasks into smaller steps. If you do not use this tool when planning, you may forget to do important tasks - and that is unacceptable.
|
||||
|
||||
It is critical that you mark todos as completed as soon as you are done with a task. Do not batch up multiple tasks before marking them as completed.
|
||||
@@ -183,13 +190,13 @@ Examples:
|
||||
|
||||
<example>
|
||||
user: Run the build and fix any type errors
|
||||
assistant: I'm going to use the ${ToolNames.TODO_WRITE} tool to write the following items to the todo list:
|
||||
assistant: I'm going to use the ${TodoWriteTool.Name} tool to write the following items to the todo list:
|
||||
- Run the build
|
||||
- Fix any type errors
|
||||
|
||||
I'm now going to run the build using Bash.
|
||||
|
||||
Looks like I found 10 type errors. I'm going to use the ${ToolNames.TODO_WRITE} tool to write 10 items to the todo list.
|
||||
Looks like I found 10 type errors. I'm going to use the ${TodoWriteTool.Name} tool to write 10 items to the todo list.
|
||||
|
||||
marking the first todo as in_progress
|
||||
|
||||
@@ -204,7 +211,7 @@ In the above example, the assistant completes all the tasks, including the 10 er
|
||||
<example>
|
||||
user: Help me write a new feature that allows users to track their usage metrics and export them to various formats
|
||||
|
||||
A: I'll help you implement a usage metrics tracking and export feature. Let me first use the ${ToolNames.TODO_WRITE} tool to plan this task.
|
||||
A: I'll help you implement a usage metrics tracking and export feature. Let me first use the ${TodoWriteTool.Name} tool to plan this task.
|
||||
Adding the following todos to the todo list:
|
||||
1. Research existing metrics tracking in the codebase
|
||||
2. Design the metrics collection system
|
||||
@@ -225,8 +232,8 @@ I've found some existing telemetry code. Let me mark the first todo as in_progre
|
||||
|
||||
## Software Engineering Tasks
|
||||
When requested to perform tasks like fixing bugs, adding features, refactoring, or explaining code, follow this iterative approach:
|
||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the '${ToolNames.TODO_WRITE}' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use '${ToolNames.GREP}', '${ToolNames.GLOB}', '${ToolNames.READ_FILE}', and '${ToolNames.READ_MANY_FILES}' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., '${ToolNames.EDIT}', '${ToolNames.WRITE_FILE}' '${ToolNames.SHELL}' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||
- **Plan:** After understanding the user's request, create an initial plan based on your existing knowledge and any immediately obvious context. Use the '${TodoWriteTool.Name}' tool to capture this rough plan for complex or multi-step work. Don't wait for complete understanding - start with what you know.
|
||||
- **Implement:** Begin implementing the plan while gathering additional context as needed. Use '${GrepTool.Name}', '${GlobTool.Name}', '${ReadFileTool.Name}', and '${ReadManyFilesTool.Name}' tools strategically when you encounter specific unknowns during implementation. Use the available tools (e.g., '${EditTool.Name}', '${WriteFileTool.Name}' '${ShellTool.Name}' ...) to act on the plan, strictly adhering to the project's established conventions (detailed under 'Core Mandates').
|
||||
- **Adapt:** As you discover new information or encounter obstacles, update your plan and todos accordingly. Mark todos as in_progress when starting and completed when finishing each task. Add new todos if the scope expands. Refine your approach based on what you learn.
|
||||
- **Verify (Tests):** If applicable and feasible, verify the changes using the project's testing procedures. Identify the correct test commands and frameworks by examining 'README' files, build/package configuration (e.g., 'package.json'), or existing test execution patterns. NEVER assume standard test commands.
|
||||
- **Verify (Standards):** VERY IMPORTANT: After making code changes, execute the project-specific build, linting and type-checking commands (e.g., 'tsc', 'npm run lint', 'ruff check .') that you have identified for this project (or obtained from the user). This ensures code quality and adherence to standards. If unsure about these commands, you can ask the user if they'd like you to run them and if so how to.
|
||||
@@ -235,11 +242,11 @@ When requested to perform tasks like fixing bugs, adding features, refactoring,
|
||||
|
||||
- Tool results and user messages may include <system-reminder> tags. <system-reminder> tags contain useful information and reminders. They are NOT part of the user's provided input or the tool result.
|
||||
|
||||
IMPORTANT: Always use the ${ToolNames.TODO_WRITE} tool to plan and track tasks throughout the conversation.
|
||||
IMPORTANT: Always use the ${TodoWriteTool.Name} tool to plan and track tasks throughout the conversation.
|
||||
|
||||
## New Applications
|
||||
|
||||
**Goal:** Autonomously implement and deliver a visually appealing, substantially complete, and functional prototype. Utilize all tools at your disposal to implement the application. Some tools you may especially find useful are '${ToolNames.WRITE_FILE}', '${ToolNames.EDIT}' and '${ToolNames.SHELL}'.
|
||||
**Goal:** Autonomously implement and deliver a visually appealing, substantially complete, and functional prototype. Utilize all tools at your disposal to implement the application. Some tools you may especially find useful are '${WriteFileTool.Name}', '${EditTool.Name}' and '${ShellTool.Name}'.
|
||||
|
||||
1. **Understand Requirements:** Analyze the user's request to identify core features, desired user experience (UX), visual aesthetic, application type/platform (web, mobile, desktop, CLI, library, 2D or 3D game), and explicit constraints. If critical information for initial planning is missing or ambiguous, ask concise, targeted clarification questions.
|
||||
2. **Propose Plan:** Formulate an internal development plan. Present a clear, concise, high-level summary to the user. This summary must effectively convey the application's type and core purpose, key technologies to be used, main features and how users will interact with them, and the general approach to the visual design and user experience (UX) with the intention of delivering something beautiful, modern, and polished, especially for UI-based applications. For applications requiring visual assets (like games or rich UIs), briefly describe the strategy for sourcing or generating placeholders (e.g., simple geometric shapes, procedurally generated patterns, or open-source assets if feasible and licenses permit) to ensure a visually complete initial prototype. Ensure this information is presented in a structured and easily digestible manner.
|
||||
@@ -252,7 +259,7 @@ IMPORTANT: Always use the ${ToolNames.TODO_WRITE} tool to plan and track tasks t
|
||||
- **3d Games:** HTML/CSS/JavaScript with Three.js.
|
||||
- **2d Games:** HTML/CSS/JavaScript.
|
||||
3. **User Approval:** Obtain user approval for the proposed plan.
|
||||
4. **Implementation:** Use the '${ToolNames.TODO_WRITE}' tool to convert the approved plan into a structured todo list with specific, actionable tasks, then autonomously implement each task utilizing all available tools. When starting ensure you scaffold the application using '${ToolNames.SHELL}' for commands like 'npm init', 'npx create-react-app'. Aim for full scope completion. Proactively create or source necessary placeholder assets (e.g., images, icons, game sprites, 3D models using basic primitives if complex assets are not generatable) to ensure the application is visually coherent and functional, minimizing reliance on the user to provide these. If the model can generate simple assets (e.g., a uniformly colored square sprite, a simple 3D cube), it should do so. Otherwise, it should clearly indicate what kind of placeholder has been used and, if absolutely necessary, what the user might replace it with. Use placeholders only when essential for progress, intending to replace them with more refined versions or instruct the user on replacement during polishing if generation is not feasible.
|
||||
4. **Implementation:** Use the '${TodoWriteTool.Name}' tool to convert the approved plan into a structured todo list with specific, actionable tasks, then autonomously implement each task utilizing all available tools. When starting ensure you scaffold the application using '${ShellTool.Name}' for commands like 'npm init', 'npx create-react-app'. Aim for full scope completion. Proactively create or source necessary placeholder assets (e.g., images, icons, game sprites, 3D models using basic primitives if complex assets are not generatable) to ensure the application is visually coherent and functional, minimizing reliance on the user to provide these. If the model can generate simple assets (e.g., a uniformly colored square sprite, a simple 3D cube), it should do so. Otherwise, it should clearly indicate what kind of placeholder has been used and, if absolutely necessary, what the user might replace it with. Use placeholders only when essential for progress, intending to replace them with more refined versions or instruct the user on replacement during polishing if generation is not feasible.
|
||||
5. **Verify:** Review work against the original request, the approved plan. Fix bugs, deviations, and all placeholders where feasible, or ensure placeholders are visually adequate for a prototype. Ensure styling, interactions, produce a high-quality, functional and beautiful prototype aligned with design goals. Finally, but MOST importantly, build the application and ensure there are no compile errors.
|
||||
6. **Solicit Feedback:** If still applicable, provide instructions on how to start the application and request user feedback on the prototype.
|
||||
|
||||
@@ -268,18 +275,18 @@ IMPORTANT: Always use the ${ToolNames.TODO_WRITE} tool to plan and track tasks t
|
||||
- **Handling Inability:** If unable/unwilling to fulfill a request, state so briefly (1-2 sentences) without excessive justification. Offer alternatives if appropriate.
|
||||
|
||||
## Security and Safety Rules
|
||||
- **Explain Critical Commands:** Before executing commands with '${ToolNames.SHELL}' that modify the file system, codebase, or system state, you *must* provide a brief explanation of the command's purpose and potential impact. Prioritize user understanding and safety. You should not ask permission to use the tool; the user will be presented with a confirmation dialogue upon use (you do not need to tell them this).
|
||||
- **Explain Critical Commands:** Before executing commands with '${ShellTool.Name}' that modify the file system, codebase, or system state, you *must* provide a brief explanation of the command's purpose and potential impact. Prioritize user understanding and safety. You should not ask permission to use the tool; the user will be presented with a confirmation dialogue upon use (you do not need to tell them this).
|
||||
- **Security First:** Always apply security best practices. Never introduce code that exposes, logs, or commits secrets, API keys, or other sensitive information.
|
||||
|
||||
## Tool Usage
|
||||
- **File Paths:** Always use absolute paths when referring to files with tools like '${ToolNames.READ_FILE}' or '${ToolNames.WRITE_FILE}'. Relative paths are not supported. You must provide an absolute path.
|
||||
- **File Paths:** Always use absolute paths when referring to files with tools like '${ReadFileTool.Name}' or '${WriteFileTool.Name}'. Relative paths are not supported. You must provide an absolute path.
|
||||
- **Parallelism:** Execute multiple independent tool calls in parallel when feasible (i.e. searching the codebase).
|
||||
- **Command Execution:** Use the '${ToolNames.SHELL}' tool for running shell commands, remembering the safety rule to explain modifying commands first.
|
||||
- **Command Execution:** Use the '${ShellTool.Name}' tool for running shell commands, remembering the safety rule to explain modifying commands first.
|
||||
- **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user.
|
||||
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
|
||||
- **Task Management:** Use the '${ToolNames.TODO_WRITE}' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the '${ToolNames.TASK}' tool in order to reduce context usage. You should proactively use the '${ToolNames.TASK}' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the '${ToolNames.MEMORY}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Task Management:** Use the '${TodoWriteTool.Name}' tool proactively for complex, multi-step tasks to track progress and provide visibility to users. This tool helps organize work systematically and ensures no requirements are missed.
|
||||
- **Subagent Delegation:** When doing file search, prefer to use the '${TaskTool.Name}' tool in order to reduce context usage. You should proactively use the '${TaskTool.Name}' tool with specialized agents when the task at hand matches the agent's description.
|
||||
- **Remembering Facts:** Use the '${MemoryTool.Name}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
|
||||
- **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward.
## Interaction Details
|
||||
@@ -331,10 +338,157 @@ ${(function () {
|
||||
return '';
|
||||
})()}
${getToolCallExamples(model || '')}
|
||||
# Examples (Illustrating Tone and Workflow)
|
||||
<example>
|
||||
user: 1 + 2
|
||||
model: 3
|
||||
</example>
<example>
|
||||
user: is 13 a prime number?
|
||||
model: true
|
||||
</example>
<example>
|
||||
user: start the server implemented in server.js
|
||||
model:
|
||||
<tool_call>
|
||||
<function=run_shell_command>
|
||||
<parameter=command>
|
||||
node server.js &
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
</example>
<example>
|
||||
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
|
||||
model: Okay, I can refactor 'src/auth.py'.
|
||||
First, I'll analyze the code and check for a test safety net before planning any changes.
|
||||
<tool_call>
|
||||
<function=glob>
|
||||
<parameter=path>
|
||||
tests/test_auth.py
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
<tool_call>
|
||||
<function=read_file>
|
||||
<parameter=path>
|
||||
/path/to/tests/test_auth.py
|
||||
</parameter>
|
||||
<parameter=offset>
|
||||
0
|
||||
</parameter>
|
||||
<parameter=limit>
|
||||
10
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
|
||||
I'll also confirm 'requests' is a dependency.
|
||||
<tool_call>
|
||||
<function=read_file>
|
||||
<parameter=path>
|
||||
/path/to/requirements.txt
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Looks good, 'requests' is available.
|
||||
|
||||
Here's the plan:
|
||||
1. Replace the 'urllib' calls with 'requests'.
|
||||
2. Add proper 'try...except' error handling for the new network calls.
|
||||
3. Remove the old 'urllib' import.
|
||||
4. Run the project's linter and tests to verify the changes.
|
||||
|
||||
<tool_call>
|
||||
<function=replace>
|
||||
<parameter=path>
|
||||
src/auth.py
|
||||
</parameter>
|
||||
<parameter=old_content>
|
||||
(old code content)
|
||||
</parameter>
|
||||
<parameter=new_content>
|
||||
(new code content)
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
Refactoring complete. Running verification...
|
||||
<tool_call>
|
||||
<function=run_shell_command>
|
||||
<parameter=command>
|
||||
ruff check src/auth.py && pytest
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
Would you like me to write a commit message and commit these changes?
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Delete the temp directory.
|
||||
model: I can run \`rm -rf /path/to/project/temp\`. This will permanently delete the directory and all its contents.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Write tests for someFile.ts
|
||||
model:
|
||||
Okay, I can write those tests. First, I'll read someFile.ts to understand its functionality.
|
||||
<tool_call>
|
||||
<function=read_file>
|
||||
<parameter=path>
|
||||
/path/to/someFile.ts
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
|
||||
<tool_call>
|
||||
<function=read_many_files>
<parameter=paths>
['**/*.test.ts', 'src/**/*.spec.ts']
</parameter>
</function>
|
||||
</tool_call>
|
||||
(After reviewing existing tests and the file content)
|
||||
<tool_call>
|
||||
<function=write_file>
|
||||
<parameter=path>
|
||||
/path/to/someFile.test.ts
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
I've written the tests. Now I'll run the project's test command to verify them.
|
||||
<tool_call>
|
||||
<function=run_shell_command>
|
||||
<parameter=command>
|
||||
npm run test
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Where are all the 'app.config' files in this project? I need to check their settings.
|
||||
model:
|
||||
<tool_call>
|
||||
<function=glob>
|
||||
<parameter=pattern>
|
||||
./**/app.config
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
|
||||
I found the following 'app.config' files:
|
||||
- /path/to/moduleA/app.config
|
||||
- /path/to/moduleB/app.config
|
||||
To help you check their settings, I can read their contents. Which one would you like to start with, or should I read all of them?
|
||||
</example>
|
||||
|
||||
# Final Reminder
|
||||
Your core function is efficient and safe assistance. Balance extreme conciseness with the crucial need for clarity, especially regarding safety and potential system modifications. Always prioritize user control and project conventions. Never make assumptions about the contents of files; instead use '${ToolNames.READ_FILE}' or '${ToolNames.READ_MANY_FILES}' to ensure you aren't making broad assumptions. Finally, you are an agent - please keep going until the user's query is completely resolved.
|
||||
Your core function is efficient and safe assistance. Balance extreme conciseness with the crucial need for clarity, especially regarding safety and potential system modifications. Always prioritize user control and project conventions. Never make assumptions about the contents of files; instead use '${ReadFileTool.Name}' or '${ReadManyFilesTool.Name}' to ensure you aren't making broad assumptions. Finally, you are an agent - please keep going until the user's query is completely resolved.
|
||||
`.trim();
|
||||
|
||||
// if GEMINI_WRITE_SYSTEM_MD is set (and not 0|false), write base system prompt to file
|
||||
@@ -461,366 +615,3 @@ You are a specialized context summarizer that creates a comprehensive markdown s
|
||||
|
||||
`.trim();
|
||||
}
|
||||
|
||||
const generalToolCallExamples = `
|
||||
# Examples (Illustrating Tone and Workflow)
|
||||
<example>
|
||||
user: 1 + 2
|
||||
model: 3
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: is 13 a prime number?
|
||||
model: true
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: start the server implemented in server.js
|
||||
model: [tool_call: ${ToolNames.SHELL} for 'node server.js &' because it must run in the background]
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
|
||||
model: Okay, I can refactor 'src/auth.py'.
|
||||
First, I'll analyze the code and check for a test safety net before planning any changes.
|
||||
[tool_call: ${ToolNames.GLOB} for path 'tests/test_auth.py']
|
||||
[tool_call: ${ToolNames.READ_FILE} for path '/path/to/tests/test_auth.py' with offset 0 and limit 10]
|
||||
(After analysis)
|
||||
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
|
||||
I'll also confirm 'requests' is a dependency.
|
||||
[tool_call: ${ToolNames.READ_FILE} for path '/path/to/requirements.txt']
|
||||
(After analysis)
|
||||
Looks good, 'requests' is available.
|
||||
|
||||
Here's the plan:
|
||||
1. Replace the 'urllib' calls with 'requests'.
|
||||
2. Add proper 'try...except' error handling for the new network calls.
|
||||
3. Remove the old 'urllib' import.
|
||||
4. Run the project's linter and tests to verify the changes.
|
||||
|
||||
[tool_call: ${ToolNames.EDIT} for path 'src/auth.py' replacing old content with new content]
|
||||
Refactoring complete. Running verification...
|
||||
[tool_call: ${ToolNames.SHELL} for 'ruff check src/auth.py && pytest']
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
Would you like me to write a commit message and commit these changes?
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Delete the temp directory.
|
||||
model: I can run \`rm -rf /path/to/project/temp\`. This will permanently delete the directory and all its contents.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Write tests for someFile.ts
|
||||
model:
|
||||
Okay, I can write those tests. First, I'll read someFile.ts to understand its functionality.
|
||||
[tool_call: ${ToolNames.READ_FILE} for path '/path/to/someFile.ts']
|
||||
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
|
||||
[tool_call: ${ToolNames.READ_MANY_FILES} for paths ['**/*.test.ts', 'src/**/*.spec.ts']]
|
||||
(After reviewing existing tests and the file content)
|
||||
[tool_call: ${ToolNames.WRITE_FILE} for path '/path/to/someFile.test.ts']
|
||||
I've written the tests. Now I'll run the project's test command to verify them.
|
||||
[tool_call: ${ToolNames.SHELL} for 'npm run test']
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Where are all the 'app.config' files in this project? I need to check their settings.
|
||||
model:
|
||||
[tool_call: ${ToolNames.GLOB} for pattern './**/app.config']
|
||||
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
|
||||
I found the following 'app.config' files:
|
||||
- /path/to/moduleA/app.config
|
||||
- /path/to/moduleB/app.config
|
||||
To help you check their settings, I can read their contents. Which one would you like to start with, or should I read all of them?
|
||||
</example>
|
||||
`.trim();
|
||||
|
||||
const qwenCoderToolCallExamples = `
|
||||
# Examples (Illustrating Tone and Workflow)
|
||||
<example>
|
||||
user: 1 + 2
|
||||
model: 3
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: is 13 a prime number?
|
||||
model: true
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: start the server implemented in server.js
|
||||
model:
|
||||
<tool_call>
|
||||
<function=${ToolNames.SHELL}>
|
||||
<parameter=command>
|
||||
node server.js &
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
|
||||
model: Okay, I can refactor 'src/auth.py'.
|
||||
First, I'll analyze the code and check for a test safety net before planning any changes.
|
||||
<tool_call>
|
||||
<function=${ToolNames.GLOB}>
|
||||
<parameter=path>
|
||||
tests/test_auth.py
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
<tool_call>
|
||||
<function=${ToolNames.READ_FILE}>
|
||||
<parameter=path>
|
||||
/path/to/tests/test_auth.py
|
||||
</parameter>
|
||||
<parameter=offset>
|
||||
0
|
||||
</parameter>
|
||||
<parameter=limit>
|
||||
10
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
|
||||
I'll also confirm 'requests' is a dependency.
|
||||
<tool_call>
|
||||
<function=${ToolNames.READ_FILE}>
|
||||
<parameter=path>
|
||||
/path/to/requirements.txt
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Looks good, 'requests' is available.
|
||||
|
||||
Here's the plan:
|
||||
1. Replace the 'urllib' calls with 'requests'.
|
||||
2. Add proper 'try...except' error handling for the new network calls.
|
||||
3. Remove the old 'urllib' import.
|
||||
4. Run the project's linter and tests to verify the changes.
|
||||
|
||||
<tool_call>
|
||||
<function=${ToolNames.EDIT}>
|
||||
<parameter=path>
|
||||
src/auth.py
|
||||
</parameter>
|
||||
<parameter=old_content>
|
||||
(old code content)
|
||||
</parameter>
|
||||
<parameter=new_content>
|
||||
(new code content)
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
Refactoring complete. Running verification...
|
||||
<tool_call>
|
||||
<function=${ToolNames.SHELL}>
|
||||
<parameter=command>
|
||||
ruff check src/auth.py && pytest
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
Would you like me to write a commit message and commit these changes?
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Delete the temp directory.
|
||||
model: I can run \`rm -rf /path/to/project/temp\`. This will permanently delete the directory and all its contents.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Write tests for someFile.ts
|
||||
model:
|
||||
Okay, I can write those tests. First, I'll read someFile.ts to understand its functionality.
|
||||
<tool_call>
|
||||
<function=${ToolNames.READ_FILE}>
|
||||
<parameter=path>
|
||||
/path/to/someFile.ts
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
|
||||
<tool_call>
|
||||
<function=${ToolNames.READ_MANY_FILES}>
|
||||
<parameter=paths>
|
||||
['**/*.test.ts', 'src/**/*.spec.ts']
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After reviewing existing tests and the file content)
|
||||
<tool_call>
|
||||
<function=${ToolNames.WRITE_FILE}>
|
||||
<parameter=path>
|
||||
/path/to/someFile.test.ts
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
I've written the tests. Now I'll run the project's test command to verify them.
|
||||
<tool_call>
|
||||
<function=${ToolNames.SHELL}>
|
||||
<parameter=command>
|
||||
npm run test
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Where are all the 'app.config' files in this project? I need to check their settings.
|
||||
model:
|
||||
<tool_call>
|
||||
<function=${ToolNames.GLOB}>
|
||||
<parameter=pattern>
|
||||
./**/app.config
|
||||
</parameter>
|
||||
</function>
|
||||
</tool_call>
|
||||
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
|
||||
I found the following 'app.config' files:
|
||||
- /path/to/moduleA/app.config
|
||||
- /path/to/moduleB/app.config
|
||||
To help you check their settings, I can read their contents. Which one would you like to start with, or should I read all of them?
|
||||
</example>
|
||||
`.trim();
|
||||
const qwenVlToolCallExamples = `
|
||||
# Examples (Illustrating Tone and Workflow)
|
||||
<example>
|
||||
user: 1 + 2
|
||||
model: 3
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: is 13 a prime number?
|
||||
model: true
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: start the server implemented in server.js
|
||||
model:
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.SHELL}", "arguments": {"command": "node server.js &"}}
|
||||
</tool_call>
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
|
||||
model: Okay, I can refactor 'src/auth.py'.
|
||||
First, I'll analyze the code and check for a test safety net before planning any changes.
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.GLOB}", "arguments": {"path": "tests/test_auth.py"}}
|
||||
</tool_call>
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.READ_FILE}", "arguments": {"path": "/path/to/tests/test_auth.py", "offset": 0, "limit": 10}}
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
|
||||
I'll also confirm 'requests' is a dependency.
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.READ_FILE}", "arguments": {"path": "/path/to/requirements.txt"}}
|
||||
</tool_call>
|
||||
(After analysis)
|
||||
Looks good, 'requests' is available.
|
||||
|
||||
Here's the plan:
|
||||
1. Replace the 'urllib' calls with 'requests'.
|
||||
2. Add proper 'try...except' error handling for the new network calls.
|
||||
3. Remove the old 'urllib' import.
|
||||
4. Run the project's linter and tests to verify the changes.
|
||||
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.EDIT}", "arguments": {"path": "src/auth.py", "old_content": "(old code content)", "new_content": "(new code content)"}}
|
||||
</tool_call>
|
||||
Refactoring complete. Running verification...
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.SHELL}", "arguments": {"command": "ruff check src/auth.py && pytest"}}
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
Would you like me to write a commit message and commit these changes?
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Delete the temp directory.
|
||||
model: I can run \`rm -rf /path/to/project/temp\`. This will permanently delete the directory and all its contents.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Write tests for someFile.ts
|
||||
model:
|
||||
Okay, I can write those tests. First, I'll read someFile.ts to understand its functionality.
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.READ_FILE}", "arguments": {"path": "/path/to/someFile.ts"}}
|
||||
</tool_call>
|
||||
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.READ_MANY_FILES}", "arguments": {"paths": ["**/*.test.ts", "src/**/*.spec.ts"]}}
|
||||
</tool_call>
|
||||
(After reviewing existing tests and the file content)
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.WRITE_FILE}", "arguments": {"path": "/path/to/someFile.test.ts"}}
|
||||
</tool_call>
|
||||
I've written the tests. Now I'll run the project's test command to verify them.
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.SHELL}", "arguments": {"command": "npm run test"}}
|
||||
</tool_call>
|
||||
(After verification passes)
|
||||
All checks passed. This is a stable checkpoint.
|
||||
</example>
|
||||
|
||||
<example>
|
||||
user: Where are all the 'app.config' files in this project? I need to check their settings.
|
||||
model:
|
||||
<tool_call>
|
||||
{"name": "${ToolNames.GLOB}", "arguments": {"pattern": "./**/app.config"}}
|
||||
</tool_call>
|
||||
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
|
||||
I found the following 'app.config' files:
|
||||
- /path/to/moduleA/app.config
|
||||
- /path/to/moduleB/app.config
|
||||
To help you check their settings, I can read their contents. Which one would you like to start with, or should I read all of them?
|
||||
</example>
|
||||
`.trim();
|
||||
|
||||
function getToolCallExamples(model?: string): string {
  // Check for environment variable override first
  const toolCallStyle = process.env['QWEN_CODE_TOOL_CALL_STYLE'];
  if (toolCallStyle) {
    switch (toolCallStyle.toLowerCase()) {
      case 'qwen-coder':
        return qwenCoderToolCallExamples;
      case 'qwen-vl':
        return qwenVlToolCallExamples;
      case 'general':
        return generalToolCallExamples;
      default:
        console.warn(
          `Unknown QWEN_CODE_TOOL_CALL_STYLE value: ${toolCallStyle}. Using model-based detection.`,
        );
        break;
    }
  }

  // Enhanced regex-based model detection
  if (model && model.length < 100) {
    // Match qwen*-coder patterns (e.g., qwen3-coder, qwen2.5-coder, qwen-coder)
    if (/qwen[^-]*-coder/i.test(model)) {
      return qwenCoderToolCallExamples;
    }
    // Match qwen*-vl patterns (e.g., qwen-vl, qwen2-vl, qwen3-vl)
    if (/qwen[^-]*-vl/i.test(model)) {
      return qwenVlToolCallExamples;
    }
  }

  return generalToolCallExamples;
}
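A minimal usage sketch of the selection logic above; the model names below are assumed examples, not values taken from this repository:

```ts
// Environment override wins regardless of the model name.
process.env['QWEN_CODE_TOOL_CALL_STYLE'] = 'qwen-vl';
getToolCallExamples('qwen3-coder-plus'); // -> qwenVlToolCallExamples

// With no override, the regexes on the model name decide.
delete process.env['QWEN_CODE_TOOL_CALL_STYLE'];
getToolCallExamples('qwen3-coder-480b'); // /qwen[^-]*-coder/i -> qwenCoderToolCallExamples
getToolCallExamples('qwen2-vl-72b');     // /qwen[^-]*-vl/i    -> qwenVlToolCallExamples
getToolCallExamples('gpt-4o');           // no match           -> generalToolCallExamples
```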
@@ -116,9 +116,6 @@ const PATTERNS: Array<[RegExp, TokenCount]> = [
|
||||
[/^qwen-flash-latest$/, LIMITS['1m']],
|
||||
[/^qwen-turbo.*$/, LIMITS['128k']],
|
||||
|
||||
// Qwen Vision Models
|
||||
[/^qwen-vl-max.*$/, LIMITS['128k']],
|
||||
|
||||
// -------------------
|
||||
// ByteDance Seed-OSS (512K)
|
||||
// -------------------
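A hedged sketch of how a pattern table like this is typically consulted; `PATTERNS` is the `Array<[RegExp, TokenCount]>` named in the hunk header above, while `tokenLimitFor` is a hypothetical helper, not code from this repository:

```ts
// Sketch only: first matching pattern wins, so more specific patterns should come first.
function tokenLimitFor<T>(model: string, patterns: Array<[RegExp, T]>): T | undefined {
  for (const [pattern, limit] of patterns) {
    if (pattern.test(model)) {
      return limit;
    }
  }
  return undefined;
}
```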
@@ -242,7 +242,7 @@ describe('Turn', () => {
|
||||
expect(turn.getDebugResponses().length).toBe(0);
|
||||
expect(reportError).toHaveBeenCalledWith(
|
||||
error,
|
||||
'Error when talking to API',
|
||||
'Error when talking to Gemini API',
|
||||
[...historyContent, reqParts],
|
||||
'Turn.run-sendMessageStream',
|
||||
);
|
||||
|
||||
@@ -310,7 +310,7 @@ export class Turn {
|
||||
const contextForReport = [...this.chat.getHistory(/*curated*/ true), req];
|
||||
await reportError(
|
||||
error,
|
||||
'Error when talking to API',
|
||||
'Error when talking to Gemini API',
|
||||
contextForReport,
|
||||
'Turn.run-sendMessageStream',
|
||||
);
|
||||
|
||||
@@ -401,9 +401,11 @@ describe('QwenContentGenerator', () => {
|
||||
expect(mockQwenClient.getAccessToken).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should count tokens without requiring authentication', async () => {
|
||||
// Clear any previous mock calls
|
||||
vi.clearAllMocks();
|
||||
it('should count tokens with valid token', async () => {
|
||||
vi.mocked(mockQwenClient.getAccessToken).mockResolvedValue({
|
||||
token: 'valid-token',
|
||||
});
|
||||
vi.mocked(mockQwenClient.getCredentials).mockReturnValue(mockCredentials);
|
||||
|
||||
const request: CountTokensParameters = {
|
||||
model: 'qwen-turbo',
|
||||
@@ -413,8 +415,7 @@ describe('QwenContentGenerator', () => {
|
||||
const result = await qwenContentGenerator.countTokens(request);
|
||||
|
||||
expect(result.totalTokens).toBe(15);
|
||||
// countTokens is a local operation and should not require OAuth credentials
|
||||
expect(mockQwenClient.getAccessToken).not.toHaveBeenCalled();
|
||||
expect(mockQwenClient.getAccessToken).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should embed content with valid token', async () => {
|
||||
@@ -1651,7 +1652,7 @@ describe('QwenContentGenerator', () => {
|
||||
SharedTokenManager.getInstance = originalGetInstance;
|
||||
});
|
||||
|
||||
it('should handle method types with token failure (except countTokens)', async () => {
|
||||
it('should handle all method types with token failure', async () => {
|
||||
const mockTokenManager = {
|
||||
getValidCredentials: vi
|
||||
.fn()
|
||||
@@ -1684,7 +1685,7 @@ describe('QwenContentGenerator', () => {
|
||||
contents: [{ parts: [{ text: 'Embed' }] }],
|
||||
};
|
||||
|
||||
// Methods requiring authentication should fail
|
||||
// All methods should fail with the same error
|
||||
await expect(
|
||||
newGenerator.generateContent(generateRequest, 'test-id'),
|
||||
).rejects.toThrow('Failed to obtain valid Qwen access token');
|
||||
@@ -1693,13 +1694,13 @@ describe('QwenContentGenerator', () => {
|
||||
newGenerator.generateContentStream(generateRequest, 'test-id'),
|
||||
).rejects.toThrow('Failed to obtain valid Qwen access token');
|
||||
|
||||
await expect(newGenerator.embedContent(embedRequest)).rejects.toThrow(
|
||||
await expect(newGenerator.countTokens(countRequest)).rejects.toThrow(
|
||||
'Failed to obtain valid Qwen access token',
|
||||
);
|
||||
|
||||
// countTokens should succeed as it's a local operation
|
||||
const countResult = await newGenerator.countTokens(countRequest);
|
||||
expect(countResult.totalTokens).toBe(15);
|
||||
await expect(newGenerator.embedContent(embedRequest)).rejects.toThrow(
|
||||
'Failed to obtain valid Qwen access token',
|
||||
);
|
||||
|
||||
SharedTokenManager.getInstance = originalGetInstance;
|
||||
});
|
||||
|
||||
@@ -180,7 +180,9 @@ export class QwenContentGenerator extends OpenAIContentGenerator {
|
||||
override async countTokens(
|
||||
request: CountTokensParameters,
|
||||
): Promise<CountTokensResponse> {
|
||||
return super.countTokens(request);
|
||||
return this.executeWithCredentialManagement(() =>
|
||||
super.countTokens(request),
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -176,7 +176,7 @@ describe('GitService', () => {
|
||||
await service.setupShadowGitRepository();
|
||||
|
||||
const expectedConfigContent =
|
||||
'[user]\n name = Qwen Code\n email = qwen-code@qwen.ai\n[commit]\n gpgsign = false\n';
|
||||
'[user]\n name = Gemini CLI\n email = gemini-cli@google.com\n[commit]\n gpgsign = false\n';
|
||||
const actualConfigContent = await fs.readFile(gitConfigPath, 'utf-8');
|
||||
expect(actualConfigContent).toBe(expectedConfigContent);
|
||||
});
|
||||
|
||||
@@ -65,7 +65,7 @@ export class GitService {
|
||||
// We don't want to inherit the user's name, email, or gpg signing
|
||||
// preferences for the shadow repository, so we create a dedicated gitconfig.
|
||||
const gitConfigContent =
|
||||
'[user]\n name = Qwen Code\n email = qwen-code@qwen.ai\n[commit]\n gpgsign = false\n';
|
||||
'[user]\n name = Gemini CLI\n email = gemini-cli@google.com\n[commit]\n gpgsign = false\n';
|
||||
await fs.writeFile(gitConfigPath, gitConfigContent);
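For readability, the escaped string above renders to the following shadow-repo config (this is purely a re-display of the Qwen Code variant of the string, not new configuration):

```
[user]
  name = Qwen Code
  email = qwen-code@qwen.ai
[commit]
  gpgsign = false
```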
const repo = simpleGit(repoDir);
|
||||
|
||||
@@ -185,7 +185,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
validMarkdown,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.name).toBe('test-agent');
|
||||
@@ -210,7 +209,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
markdownWithTools,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.tools).toEqual(['read_file', 'write_file']);
|
||||
@@ -231,7 +229,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
markdownWithModel,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.modelConfig).toEqual({ model: 'custom-model', temp: 0.5 });
|
||||
@@ -252,7 +249,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
markdownWithRun,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.runConfig).toEqual({ max_time_minutes: 5, max_turns: 10 });
|
||||
@@ -270,7 +266,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
markdownWithNumeric,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.name).toBe('11');
|
||||
@@ -291,7 +286,6 @@ You are a helpful assistant.
|
||||
const config = manager.parseSubagentContent(
|
||||
markdownWithBoolean,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
);
|
||||
|
||||
expect(config.name).toBe('true');
|
||||
@@ -307,13 +301,8 @@ You are a helpful assistant.
|
||||
const projectConfig = manager.parseSubagentContent(
|
||||
validMarkdown,
|
||||
projectPath,
|
||||
'project',
|
||||
);
|
||||
const userConfig = manager.parseSubagentContent(
|
||||
validMarkdown,
|
||||
userPath,
|
||||
'user',
|
||||
);
|
||||
const userConfig = manager.parseSubagentContent(validMarkdown, userPath);
|
||||
|
||||
expect(projectConfig.level).toBe('project');
|
||||
expect(userConfig.level).toBe('user');
|
||||
@@ -324,11 +313,7 @@ You are a helpful assistant.
|
||||
Just content`;
|
||||
|
||||
expect(() =>
|
||||
manager.parseSubagentContent(
|
||||
invalidMarkdown,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
),
|
||||
manager.parseSubagentContent(invalidMarkdown, validConfig.filePath),
|
||||
).toThrow(SubagentError);
|
||||
});
|
||||
|
||||
@@ -341,11 +326,7 @@ You are a helpful assistant.
|
||||
`;
|
||||
|
||||
expect(() =>
|
||||
manager.parseSubagentContent(
|
||||
markdownWithoutName,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
),
|
||||
manager.parseSubagentContent(markdownWithoutName, validConfig.filePath),
|
||||
).toThrow(SubagentError);
|
||||
});
|
||||
|
||||
@@ -361,20 +342,39 @@ You are a helpful assistant.
|
||||
manager.parseSubagentContent(
|
||||
markdownWithoutDescription,
|
||||
validConfig.filePath,
|
||||
'project',
|
||||
),
|
||||
).toThrow(SubagentError);
|
||||
});
|
||||
|
||||
it('should warn when filename does not match subagent name', () => {
|
||||
const consoleSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
|
||||
const mismatchedPath = '/test/project/.qwen/agents/wrong-filename.md';
|
||||
|
||||
const config = manager.parseSubagentContent(
|
||||
validMarkdown,
|
||||
mismatchedPath,
|
||||
);
|
||||
|
||||
expect(config.name).toBe('test-agent');
|
||||
expect(consoleSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'Warning: Subagent file "wrong-filename.md" contains name "test-agent"',
|
||||
),
|
||||
);
|
||||
expect(consoleSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining(
|
||||
'Consider renaming the file to "test-agent.md"',
|
||||
),
|
||||
);
|
||||
|
||||
consoleSpy.mockRestore();
|
||||
});
|
||||
|
||||
it('should not warn when filename matches subagent name', () => {
|
||||
const consoleSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
|
||||
const matchingPath = '/test/project/.qwen/agents/test-agent.md';
|
||||
|
||||
const config = manager.parseSubagentContent(
|
||||
validMarkdown,
|
||||
matchingPath,
|
||||
'project',
|
||||
);
|
||||
const config = manager.parseSubagentContent(validMarkdown, matchingPath);
|
||||
|
||||
expect(config.name).toBe('test-agent');
|
||||
expect(consoleSpy).not.toHaveBeenCalled();
|
||||
|
||||
@@ -39,7 +39,6 @@ const AGENT_CONFIG_DIR = 'agents';
|
||||
*/
|
||||
export class SubagentManager {
|
||||
private readonly validator: SubagentValidator;
|
||||
private subagentsCache: Map<SubagentLevel, SubagentConfig[]> | null = null;
|
||||
|
||||
constructor(private readonly config: Config) {
|
||||
this.validator = new SubagentValidator();
|
||||
@@ -93,8 +92,6 @@ export class SubagentManager {
|
||||
|
||||
try {
|
||||
await fs.writeFile(filePath, content, 'utf8');
|
||||
// Clear cache after successful creation
|
||||
this.clearCache();
|
||||
} catch (error) {
|
||||
throw new SubagentError(
|
||||
`Failed to write subagent file: ${error instanceof Error ? error.message : 'Unknown error'}`,
|
||||
@@ -183,8 +180,6 @@ export class SubagentManager {
|
||||
|
||||
try {
|
||||
await fs.writeFile(existing.filePath, content, 'utf8');
|
||||
// Clear cache after successful update
|
||||
this.clearCache();
|
||||
} catch (error) {
|
||||
throw new SubagentError(
|
||||
`Failed to update subagent file: ${error instanceof Error ? error.message : 'Unknown error'}`,
|
||||
@@ -241,9 +236,6 @@ export class SubagentManager {
|
||||
name,
|
||||
);
|
||||
}
|
||||
|
||||
// Clear cache after successful deletion
|
||||
this.clearCache();
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -262,17 +254,9 @@ export class SubagentManager {
|
||||
? [options.level]
|
||||
: ['project', 'user', 'builtin'];
|
||||
|
||||
// Check if we should use cache or force refresh
|
||||
const shouldUseCache = !options.force && this.subagentsCache !== null;
|
||||
|
||||
// Initialize cache if it doesn't exist or we're forcing a refresh
|
||||
if (!shouldUseCache) {
|
||||
await this.refreshCache();
|
||||
}
|
||||
|
||||
// Collect subagents from each level (project takes precedence over user, user takes precedence over builtin)
|
||||
for (const level of levelsToCheck) {
|
||||
const levelSubagents = this.subagentsCache?.get(level) || [];
|
||||
const levelSubagents = await this.listSubagentsAtLevel(level);
|
||||
|
||||
for (const subagent of levelSubagents) {
|
||||
// Skip if we've already seen this name (precedence: project > user > builtin)
|
||||
@@ -320,30 +304,6 @@ export class SubagentManager {
|
||||
return subagents;
|
||||
}
|
||||
|
||||
/**
|
||||
* Refreshes the subagents cache by loading all subagents from disk.
|
||||
* This method is called automatically when cache is null or when force=true.
|
||||
*
|
||||
* @private
|
||||
*/
|
||||
private async refreshCache(): Promise<void> {
|
||||
this.subagentsCache = new Map();
|
||||
|
||||
const levels: SubagentLevel[] = ['project', 'user', 'builtin'];
|
||||
|
||||
for (const level of levels) {
|
||||
const levelSubagents = await this.listSubagentsAtLevel(level);
|
||||
this.subagentsCache.set(level, levelSubagents);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clears the subagents cache, forcing the next listSubagents call to reload from disk.
|
||||
*/
|
||||
clearCache(): void {
|
||||
this.subagentsCache = null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Finds a subagent by name and returns its metadata.
|
||||
*
|
||||
@@ -369,10 +329,7 @@ export class SubagentManager {
|
||||
* @returns SubagentConfig
|
||||
* @throws SubagentError if parsing fails
|
||||
*/
|
||||
async parseSubagentFile(
|
||||
filePath: string,
|
||||
level: SubagentLevel,
|
||||
): Promise<SubagentConfig> {
|
||||
async parseSubagentFile(filePath: string): Promise<SubagentConfig> {
|
||||
let content: string;
|
||||
|
||||
try {
|
||||
@@ -384,7 +341,7 @@ export class SubagentManager {
|
||||
);
|
||||
}
|
||||
|
||||
return this.parseSubagentContent(content, filePath, level);
|
||||
return this.parseSubagentContent(content, filePath);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -395,11 +352,7 @@ export class SubagentManager {
|
||||
* @returns SubagentConfig
|
||||
* @throws SubagentError if parsing fails
|
||||
*/
|
||||
parseSubagentContent(
|
||||
content: string,
|
||||
filePath: string,
|
||||
level: SubagentLevel,
|
||||
): SubagentConfig {
|
||||
parseSubagentContent(content: string, filePath: string): SubagentConfig {
|
||||
try {
|
||||
// Split frontmatter and content
|
||||
const frontmatterRegex = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/;
|
||||
@@ -440,16 +393,31 @@ export class SubagentManager {
|
||||
| undefined;
|
||||
const color = frontmatter['color'] as string | undefined;
|
||||
|
||||
// Determine level from file path using robust, cross-platform check
|
||||
// A project-level agent lives under <projectRoot>/.qwen/agents
|
||||
const projectAgentsDir = path.join(
|
||||
this.config.getProjectRoot(),
|
||||
QWEN_CONFIG_DIR,
|
||||
AGENT_CONFIG_DIR,
|
||||
);
|
||||
const rel = path.relative(
|
||||
path.normalize(projectAgentsDir),
|
||||
path.normalize(filePath),
|
||||
);
|
||||
const isProjectLevel =
|
||||
rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
|
||||
const level: SubagentLevel = isProjectLevel ? 'project' : 'user';
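A small illustration of the `path.relative` check above, with assumed POSIX-style example paths:

```ts
import path from 'node:path';

// Assumed project root: /repo, so projectAgentsDir = /repo/.qwen/agents
const projectAgentsDir = '/repo/.qwen/agents';

path.relative(projectAgentsDir, '/repo/.qwen/agents/reviewer.md');
// -> 'reviewer.md' (non-empty, no '..', not absolute)      => 'project' level

path.relative(projectAgentsDir, '/home/user/.qwen/agents/reviewer.md');
// -> '../../../home/user/.qwen/agents/reviewer.md' ('..')  => 'user' level
```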
const config: SubagentConfig = {
|
||||
name,
|
||||
description,
|
||||
tools,
|
||||
systemPrompt: systemPrompt.trim(),
|
||||
level,
|
||||
filePath,
|
||||
modelConfig: modelConfig as Partial<ModelConfig>,
|
||||
runConfig: runConfig as Partial<RunConfig>,
|
||||
color,
|
||||
level,
|
||||
};
|
||||
|
||||
// Validate the parsed configuration
|
||||
@@ -458,6 +426,16 @@ export class SubagentManager {
|
||||
throw new Error(`Validation failed: ${validation.errors.join(', ')}`);
|
||||
}
|
||||
|
||||
// Warn if filename doesn't match subagent name (potential issue)
|
||||
const expectedFilename = `${config.name}.md`;
|
||||
const actualFilename = path.basename(filePath);
|
||||
if (actualFilename !== expectedFilename) {
|
||||
console.warn(
|
||||
`Warning: Subagent file "${actualFilename}" contains name "${config.name}" but filename suggests "${path.basename(actualFilename, '.md')}". ` +
|
||||
`Consider renaming the file to "${expectedFilename}" for consistency.`,
|
||||
);
|
||||
}
|
||||
|
||||
return config;
|
||||
} catch (error) {
|
||||
throw new SubagentError(
|
||||
@@ -700,18 +678,14 @@ export class SubagentManager {
|
||||
return BuiltinAgentRegistry.getBuiltinAgents();
|
||||
}
|
||||
|
||||
const projectRoot = this.config.getProjectRoot();
|
||||
const homeDir = os.homedir();
|
||||
const isHomeDirectory = path.resolve(projectRoot) === path.resolve(homeDir);
|
||||
|
||||
// If project level is requested but project root is same as home directory,
|
||||
// return empty array to avoid conflicts between project and global agents
|
||||
if (level === 'project' && isHomeDirectory) {
|
||||
return [];
|
||||
}
|
||||
|
||||
let baseDir = level === 'project' ? projectRoot : homeDir;
|
||||
baseDir = path.join(baseDir, QWEN_CONFIG_DIR, AGENT_CONFIG_DIR);
|
||||
const baseDir =
|
||||
level === 'project'
|
||||
? path.join(
|
||||
this.config.getProjectRoot(),
|
||||
QWEN_CONFIG_DIR,
|
||||
AGENT_CONFIG_DIR,
|
||||
)
|
||||
: path.join(os.homedir(), QWEN_CONFIG_DIR, AGENT_CONFIG_DIR);
|
||||
|
||||
try {
|
||||
const files = await fs.readdir(baseDir);
|
||||
@@ -723,7 +697,7 @@ export class SubagentManager {
|
||||
const filePath = path.join(baseDir, file);
|
||||
|
||||
try {
|
||||
const config = await this.parseSubagentFile(filePath, level);
|
||||
const config = await this.parseSubagentFile(filePath);
|
||||
subagents.push(config);
|
||||
} catch (_error) {
|
||||
// Ignore invalid files
|
||||
|
||||
@@ -23,11 +23,7 @@ import {
|
||||
} from 'vitest';
|
||||
import { Config, type ConfigParameters } from '../config/config.js';
|
||||
import { DEFAULT_GEMINI_MODEL } from '../config/models.js';
|
||||
import {
|
||||
createContentGenerator,
|
||||
createContentGeneratorConfig,
|
||||
AuthType,
|
||||
} from '../core/contentGenerator.js';
|
||||
import { createContentGenerator } from '../core/contentGenerator.js';
|
||||
import { GeminiChat } from '../core/geminiChat.js';
|
||||
import { executeToolCall } from '../core/nonInteractiveToolExecutor.js';
|
||||
import type { ToolRegistry } from '../tools/tool-registry.js';
|
||||
@@ -60,7 +56,8 @@ async function createMockConfig(
|
||||
};
|
||||
const config = new Config(configParams);
|
||||
await config.initialize();
|
||||
await config.refreshAuth(AuthType.USE_GEMINI);
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
await config.refreshAuth('test-auth' as any);
|
||||
|
||||
// Mock ToolRegistry
|
||||
const mockToolRegistry = {
|
||||
@@ -167,10 +164,6 @@ describe('subagent.ts', () => {
|
||||
getGenerativeModel: vi.fn(),
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
} as any);
|
||||
vi.mocked(createContentGeneratorConfig).mockReturnValue({
|
||||
model: DEFAULT_GEMINI_MODEL,
|
||||
authType: undefined,
|
||||
});
|
||||
|
||||
mockSendMessageStream = vi.fn();
|
||||
// We mock the implementation of the constructor.
|
||||
|
||||
@@ -116,9 +116,6 @@ export interface ListSubagentsOptions {
|
||||
|
||||
/** Sort direction */
|
||||
sortOrder?: 'asc' | 'desc';
|
||||
|
||||
/** Force refresh from disk, bypassing cache. Defaults to false. */
|
||||
force?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -24,7 +24,6 @@ import { ApprovalMode } from '../config/config.js';
|
||||
import { ensureCorrectEdit } from '../utils/editCorrector.js';
|
||||
import { DEFAULT_DIFF_OPTIONS, getDiffStat } from './diffOptions.js';
|
||||
import { ReadFileTool } from './read-file.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import type {
|
||||
ModifiableDeclarativeTool,
|
||||
ModifyContext,
|
||||
@@ -462,7 +461,7 @@ export class EditTool
|
||||
extends BaseDeclarativeTool<EditToolParams, ToolResult>
|
||||
implements ModifiableDeclarativeTool<EditToolParams>
|
||||
{
|
||||
static readonly Name = ToolNames.EDIT;
|
||||
static readonly Name = 'edit';
|
||||
constructor(private readonly config: Config) {
|
||||
super(
|
||||
EditTool.Name,
|
||||
|
||||
@@ -62,9 +62,6 @@ describe('GlobTool', () => {
|
||||
// Ensure a noticeable difference in modification time
|
||||
await new Promise((resolve) => setTimeout(resolve, 50));
|
||||
await fs.writeFile(path.join(tempRootDir, 'newer.sortme'), 'newer_content');
|
||||
|
||||
// For type coercion testing
|
||||
await fs.mkdir(path.join(tempRootDir, '123'));
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
@@ -282,20 +279,26 @@ describe('GlobTool', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should pass if path is provided but is not a string (type coercion)', () => {
|
||||
it('should return error if path is provided but is not a string (schema validation)', () => {
|
||||
const params = {
|
||||
pattern: '*.ts',
|
||||
path: 123,
|
||||
} as unknown as GlobToolParams; // Force incorrect type
|
||||
expect(globTool.validateToolParams(params)).toBeNull();
|
||||
};
|
||||
// @ts-expect-error - We're intentionally creating invalid params for testing
|
||||
expect(globTool.validateToolParams(params)).toBe(
|
||||
'params/path must be string',
|
||||
);
|
||||
});
|
||||
|
||||
it('should pass if case_sensitive is provided but is not a boolean (type coercion)', () => {
|
||||
it('should return error if case_sensitive is provided but is not a boolean (schema validation)', () => {
|
||||
const params = {
|
||||
pattern: '*.ts',
|
||||
case_sensitive: 'true',
|
||||
} as unknown as GlobToolParams; // Force incorrect type
|
||||
expect(globTool.validateToolParams(params)).toBeNull();
|
||||
};
|
||||
// @ts-expect-error - We're intentionally creating invalid params for testing
|
||||
expect(globTool.validateToolParams(params)).toBe(
|
||||
'params/case_sensitive must be boolean',
|
||||
);
|
||||
});
|
||||
|
||||
it("should return error if search path resolves outside the tool's root directory", () => {
|
||||
|
||||
@@ -9,7 +9,6 @@ import path from 'node:path';
|
||||
import { glob, escape } from 'glob';
|
||||
import type { ToolInvocation, ToolResult } from './tools.js';
|
||||
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import { shortenPath, makeRelative } from '../utils/paths.js';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { ToolErrorType } from './tool-error.js';
|
||||
@@ -253,7 +252,7 @@ class GlobToolInvocation extends BaseToolInvocation<
|
||||
* Implementation of the Glob tool logic
|
||||
*/
|
||||
export class GlobTool extends BaseDeclarativeTool<GlobToolParams, ToolResult> {
|
||||
static readonly Name = ToolNames.GLOB;
|
||||
static readonly Name = 'glob';
|
||||
|
||||
constructor(private config: Config) {
|
||||
super(
|
||||
|
||||
@@ -12,7 +12,6 @@ import { spawn } from 'node:child_process';
|
||||
import { globStream } from 'glob';
|
||||
import type { ToolInvocation, ToolResult } from './tools.js';
|
||||
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import { makeRelative, shortenPath } from '../utils/paths.js';
|
||||
import { getErrorMessage, isNodeError } from '../utils/errors.js';
|
||||
import { isGitRepository } from '../utils/gitUtils.js';
|
||||
@@ -598,7 +597,7 @@ class GrepToolInvocation extends BaseToolInvocation<
|
||||
* Implementation of the Grep tool logic (moved from CLI)
|
||||
*/
|
||||
export class GrepTool extends BaseDeclarativeTool<GrepToolParams, ToolResult> {
|
||||
static readonly Name = ToolNames.GREP;
|
||||
static readonly Name = 'search_file_content'; // Keep static name
|
||||
|
||||
constructor(private readonly config: Config) {
|
||||
super(
|
||||
|
||||
@@ -8,7 +8,6 @@ import path from 'node:path';
|
||||
import { makeRelative, shortenPath } from '../utils/paths.js';
|
||||
import type { ToolInvocation, ToolLocation, ToolResult } from './tools.js';
|
||||
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
|
||||
import type { PartUnion } from '@google/genai';
|
||||
import {
|
||||
@@ -137,7 +136,7 @@ export class ReadFileTool extends BaseDeclarativeTool<
|
||||
ReadFileToolParams,
|
||||
ToolResult
|
||||
> {
|
||||
static readonly Name: string = ToolNames.READ_FILE;
|
||||
static readonly Name: string = 'read_file';
|
||||
|
||||
constructor(private config: Config) {
|
||||
super(
|
||||
|
||||
@@ -191,12 +191,14 @@ describe('ReadManyFilesTool', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should coerce non-string elements in include array', () => {
|
||||
it('should throw error if include array contains non-string elements', () => {
|
||||
const params = {
|
||||
paths: ['file1.txt'],
|
||||
include: ['*.ts', 123] as string[],
|
||||
};
|
||||
expect(() => tool.build(params)).toBeDefined();
|
||||
expect(() => tool.build(params)).toThrow(
|
||||
'params/include/1 must be string',
|
||||
);
|
||||
});
|
||||
|
||||
it('should throw error if exclude array contains non-string elements', () => {
|
||||
|
||||
@@ -6,7 +6,6 @@
|
||||
|
||||
import type { ToolInvocation, ToolResult } from './tools.js';
|
||||
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import { getErrorMessage } from '../utils/errors.js';
|
||||
import * as fs from 'node:fs';
|
||||
import * as path from 'node:path';
|
||||
@@ -527,7 +526,7 @@ export class ReadManyFilesTool extends BaseDeclarativeTool<
|
||||
ReadManyFilesParams,
|
||||
ToolResult
|
||||
> {
|
||||
static readonly Name: string = ToolNames.READ_MANY_FILES;
|
||||
static readonly Name: string = 'read_many_files';
|
||||
|
||||
constructor(private config: Config) {
|
||||
const parameterSchema = {
|
||||
|
||||
@@ -9,7 +9,6 @@ import path from 'node:path';
|
||||
import os, { EOL } from 'node:os';
|
||||
import crypto from 'node:crypto';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import { ToolErrorType } from './tool-error.js';
|
||||
import type {
|
||||
ToolInvocation,
|
||||
@@ -404,7 +403,7 @@ export class ShellTool extends BaseDeclarativeTool<
|
||||
ShellToolParams,
|
||||
ToolResult
|
||||
> {
|
||||
static Name: string = ToolNames.SHELL;
|
||||
static Name: string = 'run_shell_command';
|
||||
private allowlist: Set<string> = new Set();
|
||||
|
||||
constructor(private readonly config: Config) {
|
||||
@@ -420,11 +419,6 @@ export class ShellTool extends BaseDeclarativeTool<
|
||||
type: 'string',
|
||||
description: getCommandDescription(),
|
||||
},
|
||||
is_background: {
|
||||
type: 'boolean',
|
||||
description:
|
||||
'Whether to run the command in background. Default is false. Set to true for long-running processes like development servers, watchers, or daemons that should continue running without blocking further commands.',
|
||||
},
|
||||
description: {
|
||||
type: 'string',
|
||||
description:
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
*/
|
||||
|
||||
import { BaseDeclarativeTool, BaseToolInvocation, Kind } from './tools.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import type {
|
||||
ToolResult,
|
||||
ToolResultDisplay,
|
||||
@@ -47,7 +46,7 @@ export interface TaskParams {
|
||||
* for the model to choose from.
|
||||
*/
|
||||
export class TaskTool extends BaseDeclarativeTool<TaskParams, ToolResult> {
|
||||
static readonly Name: string = ToolNames.TASK;
|
||||
static readonly Name: string = 'task';
|
||||
|
||||
private subagentManager: SubagentManager;
|
||||
private availableSubagents: SubagentConfig[] = [];
|
||||
|
||||
@@ -1,23 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Google LLC
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/**
|
||||
* Tool name constants to avoid circular dependencies.
|
||||
* These constants are used across multiple files and should be kept in sync
|
||||
* with the actual tool class names.
|
||||
*/
|
||||
export const ToolNames = {
|
||||
EDIT: 'edit',
|
||||
WRITE_FILE: 'write_file',
|
||||
READ_FILE: 'read_file',
|
||||
READ_MANY_FILES: 'read_many_files',
|
||||
GREP: 'search_file_content',
|
||||
GLOB: 'glob',
|
||||
SHELL: 'run_shell_command',
|
||||
TODO_WRITE: 'todo_write',
|
||||
MEMORY: 'save_memory',
|
||||
TASK: 'task',
|
||||
} as const;
|
||||
@@ -220,12 +220,14 @@ describe('WriteFileTool', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should coerce null content into an empty string', () => {
|
||||
it('should throw an error if the content is null', () => {
|
||||
const dirAsFilePath = path.join(rootDir, 'a_directory');
|
||||
fs.mkdirSync(dirAsFilePath);
|
||||
const params = {
|
||||
file_path: path.join(rootDir, 'test.txt'),
|
||||
file_path: dirAsFilePath,
|
||||
content: null,
|
||||
} as unknown as WriteFileToolParams; // Intentionally non-conforming
|
||||
expect(() => tool.build(params)).toBeDefined();
|
||||
expect(() => tool.build(params)).toThrow('params/content must be string');
|
||||
});
|
||||
|
||||
it('should throw error if the file_path is empty', () => {
|
||||
|
||||
@@ -31,7 +31,6 @@ import {
|
||||
ensureCorrectFileContent,
|
||||
} from '../utils/editCorrector.js';
|
||||
import { DEFAULT_DIFF_OPTIONS, getDiffStat } from './diffOptions.js';
|
||||
import { ToolNames } from './tool-names.js';
|
||||
import type {
|
||||
ModifiableDeclarativeTool,
|
||||
ModifyContext,
|
||||
@@ -404,7 +403,7 @@ export class WriteFileTool
|
||||
extends BaseDeclarativeTool<WriteFileToolParams, ToolResult>
|
||||
implements ModifiableDeclarativeTool<WriteFileToolParams>
|
||||
{
|
||||
static readonly Name: string = ToolNames.WRITE_FILE;
|
||||
static readonly Name: string = 'write_file';
|
||||
|
||||
constructor(private readonly config: Config) {
|
||||
super(
|
||||
|
||||
@@ -7,7 +7,11 @@
|
||||
import type { Content, GenerateContentConfig } from '@google/genai';
|
||||
import type { GeminiClient } from '../core/client.js';
|
||||
import type { EditToolParams } from '../tools/edit.js';
|
||||
import { ToolNames } from '../tools/tool-names.js';
|
||||
import { EditTool } from '../tools/edit.js';
|
||||
import { WriteFileTool } from '../tools/write-file.js';
|
||||
import { ReadFileTool } from '../tools/read-file.js';
|
||||
import { ReadManyFilesTool } from '../tools/read-many-files.js';
|
||||
import { GrepTool } from '../tools/grep.js';
|
||||
import { LruCache } from './LruCache.js';
|
||||
import { DEFAULT_QWEN_FLASH_MODEL } from '../config/models.js';
|
||||
import {
|
||||
@@ -81,14 +85,14 @@ async function findLastEditTimestamp(
|
||||
const history = (await client.getHistory()) ?? [];
|
||||
|
||||
// Tools that may reference the file path in their FunctionResponse `output`.
|
||||
const toolsInResp = new Set<string>([
|
||||
ToolNames.WRITE_FILE,
|
||||
ToolNames.EDIT,
|
||||
ToolNames.READ_MANY_FILES,
|
||||
ToolNames.GREP,
|
||||
const toolsInResp = new Set([
|
||||
WriteFileTool.Name,
|
||||
EditTool.Name,
|
||||
ReadManyFilesTool.Name,
|
||||
GrepTool.Name,
|
||||
]);
|
||||
// Tools that may reference the file path in their FunctionCall `args`.
|
||||
const toolsInCall = new Set<string>([...toolsInResp, ToolNames.READ_FILE]);
|
||||
const toolsInCall = new Set([...toolsInResp, ReadFileTool.Name]);
|
||||
|
||||
// Iterate backwards to find the most recent relevant action.
|
||||
for (const entry of history.slice().reverse()) {
|
||||
|
||||
@@ -1,157 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from 'vitest';
|
||||
import { ImageTokenizer } from './imageTokenizer.js';
|
||||
|
||||
describe('ImageTokenizer', () => {
|
||||
const tokenizer = new ImageTokenizer();
|
||||
|
||||
describe('token calculation', () => {
|
||||
it('should calculate tokens based on image dimensions with reference logic', () => {
|
||||
const metadata = {
|
||||
width: 28,
|
||||
height: 28,
|
||||
mimeType: 'image/png',
|
||||
dataSize: 1000,
|
||||
};
|
||||
|
||||
const tokens = tokenizer.calculateTokens(metadata);
|
||||
|
||||
// 28x28 = 784 pixels = 1 image token + 2 special tokens = 3 total
|
||||
// But minimum scaling may apply for small images
|
||||
expect(tokens).toBeGreaterThanOrEqual(6); // Minimum after scaling + special tokens
|
||||
});
|
||||
|
||||
it('should calculate tokens for larger images', () => {
|
||||
const metadata = {
|
||||
width: 512,
|
||||
height: 512,
|
||||
mimeType: 'image/png',
|
||||
dataSize: 10000,
|
||||
};
|
||||
|
||||
const tokens = tokenizer.calculateTokens(metadata);
|
||||
|
||||
// 512x512 with reference logic: rounded dimensions + scaling + special tokens
|
||||
expect(tokens).toBeGreaterThan(300);
|
||||
expect(tokens).toBeLessThan(400); // Should be reasonable for 512x512
|
||||
});
|
||||
|
||||
it('should enforce minimum tokens per image with scaling', () => {
|
||||
const metadata = {
|
||||
width: 1,
|
||||
height: 1,
|
||||
mimeType: 'image/png',
|
||||
dataSize: 100,
|
||||
};
|
||||
|
||||
const tokens = tokenizer.calculateTokens(metadata);
|
||||
|
||||
// Tiny images get scaled up to minimum pixels + special tokens
|
||||
expect(tokens).toBeGreaterThanOrEqual(6); // 4 image tokens + 2 special tokens
|
||||
});
|
||||
|
||||
it('should handle very large images with scaling', () => {
|
||||
const metadata = {
|
||||
width: 8192,
|
||||
height: 8192,
|
||||
mimeType: 'image/png',
|
||||
dataSize: 100000,
|
||||
};
|
||||
|
||||
const tokens = tokenizer.calculateTokens(metadata);
|
||||
|
||||
// Very large images should be scaled down to max limit + special tokens
|
||||
expect(tokens).toBeLessThanOrEqual(16386); // 16384 max + 2 special tokens
|
||||
expect(tokens).toBeGreaterThan(16000); // Should be close to the limit
|
||||
});
|
||||
});
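A rough worked example of the arithmetic these assertions rely on; this is a sketch using the documented 28x28-pixel rule, and the tokenizer's exact dimension rounding is assumed rather than reproduced:

```ts
const PIXELS_PER_TOKEN = 28 * 28; // 784, per the tokenizer's documented rule
const VISION_SPECIAL_TOKENS = 2;  // vision_bos + vision_eos

// 28x28   ->    784 / 784  =   1 image token + 2 special =  3 (tests allow a minimum-scaling floor of 6)
// 512x512 -> 262144 / 784  ≈ 335 image tokens + 2 special ≈ 337, inside the 300–400 bound asserted above
const approxTokens = (w: number, h: number) =>
  Math.ceil((w * h) / PIXELS_PER_TOKEN) + VISION_SPECIAL_TOKENS;
```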
describe('PNG dimension extraction', () => {
|
||||
it('should extract dimensions from valid PNG', async () => {
|
||||
// 1x1 PNG image in base64
|
||||
const pngBase64 =
|
||||
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';
|
||||
|
||||
const metadata = await tokenizer.extractImageMetadata(
|
||||
pngBase64,
|
||||
'image/png',
|
||||
);
|
||||
|
||||
expect(metadata.width).toBe(1);
|
||||
expect(metadata.height).toBe(1);
|
||||
expect(metadata.mimeType).toBe('image/png');
|
||||
});
|
||||
|
||||
it('should handle invalid PNG gracefully', async () => {
|
||||
const invalidBase64 = 'invalid-png-data';
|
||||
|
||||
const metadata = await tokenizer.extractImageMetadata(
|
||||
invalidBase64,
|
||||
'image/png',
|
||||
);
|
||||
|
||||
// Should return default dimensions
|
||||
expect(metadata.width).toBe(512);
|
||||
expect(metadata.height).toBe(512);
|
||||
expect(metadata.mimeType).toBe('image/png');
|
||||
});
|
||||
});
|
||||
|
||||
describe('batch processing', () => {
|
||||
it('should process multiple images serially', async () => {
|
||||
const pngBase64 =
|
||||
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';
|
||||
|
||||
const images = [
|
||||
{ data: pngBase64, mimeType: 'image/png' },
|
||||
{ data: pngBase64, mimeType: 'image/png' },
|
||||
{ data: pngBase64, mimeType: 'image/png' },
|
||||
];
|
||||
|
||||
const tokens = await tokenizer.calculateTokensBatch(images);
|
||||
|
||||
expect(tokens).toHaveLength(3);
|
||||
expect(tokens.every((t) => t >= 4)).toBe(true); // All should have at least 4 tokens
|
||||
});
|
||||
|
||||
it('should handle mixed valid and invalid images', async () => {
|
||||
const validPng =
|
||||
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';
|
||||
const invalidPng = 'invalid-data';
|
||||
|
||||
const images = [
|
||||
{ data: validPng, mimeType: 'image/png' },
|
||||
{ data: invalidPng, mimeType: 'image/png' },
|
||||
];
|
||||
|
||||
const tokens = await tokenizer.calculateTokensBatch(images);
|
||||
|
||||
expect(tokens).toHaveLength(2);
|
||||
expect(tokens.every((t) => t >= 4)).toBe(true); // All should have at least minimum tokens
|
||||
});
|
||||
});
|
||||
|
||||
describe('different image formats', () => {
|
||||
it('should handle different MIME types', async () => {
|
||||
const pngBase64 =
|
||||
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';
|
||||
|
||||
const formats = ['image/png', 'image/jpeg', 'image/webp', 'image/gif'];
|
||||
|
||||
for (const mimeType of formats) {
|
||||
const metadata = await tokenizer.extractImageMetadata(
|
||||
pngBase64,
|
||||
mimeType,
|
||||
);
|
||||
expect(metadata.mimeType).toBe(mimeType);
|
||||
expect(metadata.width).toBeGreaterThan(0);
|
||||
expect(metadata.height).toBeGreaterThan(0);
|
||||
}
|
||||
});
|
||||
});
|
||||
});
|
||||
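(Editorial note, not part of this diff.) The bounds asserted in the large-image test above follow from the scaling logic in the `ImageTokenizer` implementation below; because the scale-down path uses `Math.floor`, the result can land at or just under the cap, which is why the test checks a range rather than an exact count. A rough sketch of that arithmetic, with constants copied from the implementation:

```ts
// Sketch: the 8192x8192 case from the test above.
const PIXELS_PER_TOKEN = 28 * 28; // 784
const MAX_TOKENS_PER_IMAGE = 16384;
const VISION_SPECIAL_TOKENS = 2;

const width = 8192;
const height = 8192;
const maxPixels = MAX_TOKENS_PER_IMAGE * PIXELS_PER_TOKEN; // 12,845,056 pixels

// 8192 * 8192 = 67,108,864 pixels exceeds maxPixels, so the image is scaled down.
const beta = Math.sqrt((width * height) / maxPixels); // ~2.2857 (16 / 7)
const hBar = Math.floor(height / beta / 28) * 28; // ~3584, modulo floating-point rounding
const wBar = Math.floor(width / beta / 28) * 28; // ~3584

const tokens =
  Math.floor((hBar * wBar) / PIXELS_PER_TOKEN) + VISION_SPECIAL_TOKENS;
// Roughly 16386: the 16384 cap plus 2 special tokens, possibly a little lower
// after flooring, hence the test's window of > 16000 and <= 16386.
console.log(tokens);
```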
@@ -1,505 +0,0 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import type { ImageMetadata } from './types.js';
import { isSupportedImageMimeType } from './supportedImageFormats.js';

/**
 * Image tokenizer for calculating image tokens based on dimensions
 *
 * Key rules:
 * - 28x28 pixels = 1 token
 * - Minimum: 4 tokens per image
 * - Maximum: 16384 tokens per image
 * - Additional: 2 special tokens (vision_bos + vision_eos)
 * - Supports: PNG, JPEG, WebP, GIF, BMP, TIFF, HEIC formats
 */
export class ImageTokenizer {
  /** 28x28 pixels = 1 token */
  private static readonly PIXELS_PER_TOKEN = 28 * 28;

  /** Minimum tokens per image */
  private static readonly MIN_TOKENS_PER_IMAGE = 4;

  /** Maximum tokens per image */
  private static readonly MAX_TOKENS_PER_IMAGE = 16384;

  /** Special tokens for vision markers */
  private static readonly VISION_SPECIAL_TOKENS = 2;

  /**
   * Extract image metadata from base64 data
   *
   * @param base64Data Base64-encoded image data (with or without data URL prefix)
   * @param mimeType MIME type of the image
   * @returns Promise resolving to ImageMetadata with dimensions and format info
   */
  async extractImageMetadata(
    base64Data: string,
    mimeType: string,
  ): Promise<ImageMetadata> {
    try {
      // Check if the MIME type is supported
      if (!isSupportedImageMimeType(mimeType)) {
        console.warn(`Unsupported image format: ${mimeType}`);
        // Return default metadata for unsupported formats
        return {
          width: 512,
          height: 512,
          mimeType,
          dataSize: Math.floor(base64Data.length * 0.75),
        };
      }

      const cleanBase64 = base64Data.replace(/^data:[^;]+;base64,/, '');
      const buffer = Buffer.from(cleanBase64, 'base64');
      const dimensions = await this.extractDimensions(buffer, mimeType);

      return {
        width: dimensions.width,
        height: dimensions.height,
        mimeType,
        dataSize: buffer.length,
      };
    } catch (error) {
      console.warn('Failed to extract image metadata:', error);
      // Return default metadata for fallback
      return {
        width: 512,
        height: 512,
        mimeType,
        dataSize: Math.floor(base64Data.length * 0.75),
      };
    }
  }

  /**
   * Extract image dimensions from buffer based on format
   *
   * @param buffer Binary image data buffer
   * @param mimeType MIME type to determine parsing strategy
   * @returns Promise resolving to width and height dimensions
   */
  private async extractDimensions(
    buffer: Buffer,
    mimeType: string,
  ): Promise<{ width: number; height: number }> {
    if (mimeType.includes('png')) {
      return this.extractPngDimensions(buffer);
    }

    if (mimeType.includes('jpeg') || mimeType.includes('jpg')) {
      return this.extractJpegDimensions(buffer);
    }

    if (mimeType.includes('webp')) {
      return this.extractWebpDimensions(buffer);
    }

    if (mimeType.includes('gif')) {
      return this.extractGifDimensions(buffer);
    }

    if (mimeType.includes('bmp')) {
      return this.extractBmpDimensions(buffer);
    }

    if (mimeType.includes('tiff')) {
      return this.extractTiffDimensions(buffer);
    }

    if (mimeType.includes('heic')) {
      return this.extractHeicDimensions(buffer);
    }

    return { width: 512, height: 512 };
  }

  /**
   * Extract PNG dimensions from IHDR chunk
   * PNG signature: 89 50 4E 47 0D 0A 1A 0A
   * Width/height at bytes 16-19 and 20-23 (big-endian)
   */
  private extractPngDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 24) {
      throw new Error('Invalid PNG: buffer too short');
    }

    // Verify PNG signature
    const signature = buffer.subarray(0, 8);
    const expectedSignature = Buffer.from([
      0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a,
    ]);
    if (!signature.equals(expectedSignature)) {
      throw new Error('Invalid PNG signature');
    }

    const width = buffer.readUInt32BE(16);
    const height = buffer.readUInt32BE(20);

    return { width, height };
  }

  /**
   * Extract JPEG dimensions from SOF (Start of Frame) markers
   * JPEG starts with FF D8, SOF markers: 0xC0-0xC3, 0xC5-0xC7, 0xC9-0xCB, 0xCD-0xCF
   * Dimensions at offset +5 (height) and +7 (width) from SOF marker
   */
  private extractJpegDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 4 || buffer[0] !== 0xff || buffer[1] !== 0xd8) {
      throw new Error('Invalid JPEG signature');
    }

    let offset = 2;

    while (offset < buffer.length - 8) {
      if (buffer[offset] !== 0xff) {
        offset++;
        continue;
      }

      const marker = buffer[offset + 1];

      // SOF markers
      if (
        (marker >= 0xc0 && marker <= 0xc3) ||
        (marker >= 0xc5 && marker <= 0xc7) ||
        (marker >= 0xc9 && marker <= 0xcb) ||
        (marker >= 0xcd && marker <= 0xcf)
      ) {
        const height = buffer.readUInt16BE(offset + 5);
        const width = buffer.readUInt16BE(offset + 7);
        return { width, height };
      }

      const segmentLength = buffer.readUInt16BE(offset + 2);
      offset += 2 + segmentLength;
    }

    throw new Error('Could not find JPEG dimensions');
  }

  /**
   * Extract WebP dimensions from RIFF container
   * Supports VP8, VP8L, and VP8X formats
   */
  private extractWebpDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 30) {
      throw new Error('Invalid WebP: too short');
    }

    const riffSignature = buffer.subarray(0, 4).toString('ascii');
    const webpSignature = buffer.subarray(8, 12).toString('ascii');

    if (riffSignature !== 'RIFF' || webpSignature !== 'WEBP') {
      throw new Error('Invalid WebP signature');
    }

    const format = buffer.subarray(12, 16).toString('ascii');

    if (format === 'VP8 ') {
      const width = buffer.readUInt16LE(26) & 0x3fff;
      const height = buffer.readUInt16LE(28) & 0x3fff;
      return { width, height };
    } else if (format === 'VP8L') {
      const bits = buffer.readUInt32LE(21);
      const width = (bits & 0x3fff) + 1;
      const height = ((bits >> 14) & 0x3fff) + 1;
      return { width, height };
    } else if (format === 'VP8X') {
      // VP8X stores 24-bit "canvas size minus one" fields: width at buffer
      // offset 24 and height at offset 27 (both little-endian).
      const width = buffer.readUIntLE(24, 3) + 1;
      const height = buffer.readUIntLE(27, 3) + 1;
      return { width, height };
    }

    throw new Error('Unsupported WebP format');
  }

  /**
   * Extract GIF dimensions from header
   * Supports GIF87a and GIF89a formats
   */
  private extractGifDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 10) {
      throw new Error('Invalid GIF: too short');
    }

    const signature = buffer.subarray(0, 6).toString('ascii');
    if (signature !== 'GIF87a' && signature !== 'GIF89a') {
      throw new Error('Invalid GIF signature');
    }

    const width = buffer.readUInt16LE(6);
    const height = buffer.readUInt16LE(8);

    return { width, height };
  }

  /**
   * Calculate tokens for an image based on its metadata
   *
   * @param metadata Image metadata containing width, height, and format info
   * @returns Total token count including base image tokens and special tokens
   */
  calculateTokens(metadata: ImageMetadata): number {
    return this.calculateTokensWithScaling(metadata.width, metadata.height);
  }

  /**
   * Calculate tokens with scaling logic
   *
   * Steps:
   * 1. Normalize to 28-pixel multiples
   * 2. Scale large images down, small images up
   * 3. Calculate tokens: pixels / 784 + 2 special tokens
   *
   * @param width Original image width in pixels
   * @param height Original image height in pixels
   * @returns Total token count for the image
   */
  private calculateTokensWithScaling(width: number, height: number): number {
    // Normalize to 28-pixel multiples
    let hBar = Math.round(height / 28) * 28;
    let wBar = Math.round(width / 28) * 28;

    // Define pixel boundaries
    const minPixels =
      ImageTokenizer.MIN_TOKENS_PER_IMAGE * ImageTokenizer.PIXELS_PER_TOKEN;
    const maxPixels =
      ImageTokenizer.MAX_TOKENS_PER_IMAGE * ImageTokenizer.PIXELS_PER_TOKEN;

    // Apply scaling
    if (hBar * wBar > maxPixels) {
      // Scale down large images
      const beta = Math.sqrt((height * width) / maxPixels);
      hBar = Math.floor(height / beta / 28) * 28;
      wBar = Math.floor(width / beta / 28) * 28;
    } else if (hBar * wBar < minPixels) {
      // Scale up small images
      const beta = Math.sqrt(minPixels / (height * width));
      hBar = Math.ceil((height * beta) / 28) * 28;
      wBar = Math.ceil((width * beta) / 28) * 28;
    }

    // Calculate tokens
    const imageTokens = Math.floor(
      (hBar * wBar) / ImageTokenizer.PIXELS_PER_TOKEN,
    );

    return imageTokens + ImageTokenizer.VISION_SPECIAL_TOKENS;
  }

  /**
   * Calculate tokens for multiple images serially
   *
   * @param base64DataArray Array of image data with MIME type information
   * @returns Promise resolving to array of token counts in same order as input
   */
  async calculateTokensBatch(
    base64DataArray: Array<{ data: string; mimeType: string }>,
  ): Promise<number[]> {
    const results: number[] = [];

    for (const { data, mimeType } of base64DataArray) {
      try {
        const metadata = await this.extractImageMetadata(data, mimeType);
        results.push(this.calculateTokens(metadata));
      } catch (error) {
        console.warn('Error calculating tokens for image:', error);
        // Return minimum tokens as fallback
        results.push(
          ImageTokenizer.MIN_TOKENS_PER_IMAGE +
            ImageTokenizer.VISION_SPECIAL_TOKENS,
        );
      }
    }

    return results;
  }

  /**
   * Extract BMP dimensions from header
   * BMP signature: 42 4D (BM)
   * Width/height at bytes 18-21 and 22-25 (little-endian)
   */
  private extractBmpDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 26) {
      throw new Error('Invalid BMP: buffer too short');
    }

    // Verify BMP signature
    if (buffer[0] !== 0x42 || buffer[1] !== 0x4d) {
      throw new Error('Invalid BMP signature');
    }

    const width = buffer.readUInt32LE(18);
    // The height field is signed; top-down BMPs store it as a negative value,
    // so read it as a signed 32-bit integer before taking the magnitude.
    const height = buffer.readInt32LE(22);

    return { width, height: Math.abs(height) };
  }

  /**
   * Extract TIFF dimensions from IFD (Image File Directory)
   * TIFF can be little-endian (II) or big-endian (MM)
   * Width/height are stored in IFD entries with tags 0x0100 and 0x0101
   */
  private extractTiffDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 8) {
      throw new Error('Invalid TIFF: buffer too short');
    }

    // Check byte order
    const byteOrder = buffer.subarray(0, 2).toString('ascii');
    const isLittleEndian = byteOrder === 'II';
    const isBigEndian = byteOrder === 'MM';

    if (!isLittleEndian && !isBigEndian) {
      throw new Error('Invalid TIFF byte order');
    }

    // Read magic number (should be 42)
    const magic = isLittleEndian
      ? buffer.readUInt16LE(2)
      : buffer.readUInt16BE(2);
    if (magic !== 42) {
      throw new Error('Invalid TIFF magic number');
    }

    // Read IFD offset
    const ifdOffset = isLittleEndian
      ? buffer.readUInt32LE(4)
      : buffer.readUInt32BE(4);

    if (ifdOffset >= buffer.length) {
      throw new Error('Invalid TIFF IFD offset');
    }

    // Read number of directory entries
    const numEntries = isLittleEndian
      ? buffer.readUInt16LE(ifdOffset)
      : buffer.readUInt16BE(ifdOffset);

    let width = 0;
    let height = 0;

    // Parse IFD entries
    for (let i = 0; i < numEntries; i++) {
      const entryOffset = ifdOffset + 2 + i * 12;

      if (entryOffset + 12 > buffer.length) break;

      const tag = isLittleEndian
        ? buffer.readUInt16LE(entryOffset)
        : buffer.readUInt16BE(entryOffset);

      const type = isLittleEndian
        ? buffer.readUInt16LE(entryOffset + 2)
        : buffer.readUInt16BE(entryOffset + 2);

      // Values that fit in 4 bytes are stored left-justified in the value
      // field, so a SHORT (type 3) must be read as 16 bits rather than 32.
      const value =
        type === 3
          ? isLittleEndian
            ? buffer.readUInt16LE(entryOffset + 8)
            : buffer.readUInt16BE(entryOffset + 8)
          : isLittleEndian
            ? buffer.readUInt32LE(entryOffset + 8)
            : buffer.readUInt32BE(entryOffset + 8);

      if (tag === 0x0100) {
        // ImageWidth
        width = value;
      } else if (tag === 0x0101) {
        // ImageLength (height)
        height = value;
      }

      if (width > 0 && height > 0) break;
    }

    if (width === 0 || height === 0) {
      throw new Error('Could not find TIFF dimensions');
    }

    return { width, height };
  }

  /**
   * Extract HEIC dimensions from meta box
   * HEIC is based on ISO Base Media File Format
   * This is a simplified implementation that looks for 'ispe' (Image Spatial Extents) box
   */
  private extractHeicDimensions(buffer: Buffer): {
    width: number;
    height: number;
  } {
    if (buffer.length < 12) {
      throw new Error('Invalid HEIC: buffer too short');
    }

    // Check for ftyp box with HEIC brand
    const ftypBox = buffer.subarray(4, 8).toString('ascii');
    if (ftypBox !== 'ftyp') {
      throw new Error('Invalid HEIC: missing ftyp box');
    }

    const brand = buffer.subarray(8, 12).toString('ascii');
    if (!['heic', 'heix', 'hevc', 'hevx'].includes(brand)) {
      throw new Error('Invalid HEIC brand');
    }

    // Look for meta box and then ispe box
    let offset = 0;
    while (offset < buffer.length - 8) {
      const boxSize = buffer.readUInt32BE(offset);
      const boxType = buffer.subarray(offset + 4, offset + 8).toString('ascii');

      if (boxType === 'meta') {
        // Look for ispe box inside meta box
        const metaOffset = offset + 8;
        let innerOffset = metaOffset + 4; // Skip version and flags

        while (innerOffset < offset + boxSize - 8) {
          const innerBoxSize = buffer.readUInt32BE(innerOffset);
          const innerBoxType = buffer
            .subarray(innerOffset + 4, innerOffset + 8)
            .toString('ascii');

          if (innerBoxType === 'ispe') {
            // Found Image Spatial Extents box
            if (innerOffset + 20 <= buffer.length) {
              const width = buffer.readUInt32BE(innerOffset + 12);
              const height = buffer.readUInt32BE(innerOffset + 16);
              return { width, height };
            }
          }

          if (innerBoxSize === 0) break;
          innerOffset += innerBoxSize;
        }
      }

      if (boxSize === 0) break;
      offset += boxSize;
    }

    // Fallback: return default dimensions if we can't parse the structure
    console.warn('Could not extract HEIC dimensions, using default');
    return { width: 512, height: 512 };
  }
}
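(Editorial note, not part of this diff.) A minimal usage sketch of the class removed above. The metadata fields follow the `ImageMetadata` shape used in the tests, and the 1024x768 arithmetic comes straight from `calculateTokensWithScaling` (no scaling branch is hit because the rounded area sits between the minimum and maximum pixel budgets):

```ts
import { ImageTokenizer } from './imageTokenizer.js';

const tokenizer = new ImageTokenizer();

// 1024x768 rounds to 1036 x 756 = 783,216 pixels;
// 783,216 / 784 = 999 image tokens, plus 2 special tokens = 1001.
const tokens = tokenizer.calculateTokens({
  width: 1024,
  height: 768,
  mimeType: 'image/png',
  dataSize: 100_000, // illustrative value; dataSize does not affect the count
});
console.log(tokens); // 1001

// Dimensions can also be derived from base64 data first:
// const metadata = await tokenizer.extractImageMetadata(pngBase64, 'image/png');
// const fromData = tokenizer.calculateTokens(metadata);
```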
@@ -1,40 +0,0 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

export { DefaultRequestTokenizer } from './requestTokenizer.js';
import { DefaultRequestTokenizer } from './requestTokenizer.js';
export { TextTokenizer } from './textTokenizer.js';
export { ImageTokenizer } from './imageTokenizer.js';

export type {
  RequestTokenizer,
  TokenizerConfig,
  TokenCalculationResult,
  ImageMetadata,
} from './types.js';

// Singleton instance for convenient usage
let defaultTokenizer: DefaultRequestTokenizer | null = null;

/**
 * Get the default request tokenizer instance
 */
export function getDefaultTokenizer(): DefaultRequestTokenizer {
  if (!defaultTokenizer) {
    defaultTokenizer = new DefaultRequestTokenizer();
  }
  return defaultTokenizer;
}

/**
 * Dispose of the default tokenizer instance
 */
export async function disposeDefaultTokenizer(): Promise<void> {
  if (defaultTokenizer) {
    await defaultTokenizer.dispose();
    defaultTokenizer = null;
  }
}
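(Editorial note, not part of this diff.) A brief sketch of how the removed singleton helpers were meant to be used; the import path is illustrative, and the request shape mirrors the `CountTokensParameters` objects in the tests that follow:

```ts
import { getDefaultTokenizer, disposeDefaultTokenizer } from './index.js';
import type { CountTokensParameters } from '@google/genai';

const request: CountTokensParameters = {
  model: 'test-model',
  contents: [{ role: 'user', parts: [{ text: 'Hello, world!' }] }],
};

// Lazily creates the shared DefaultRequestTokenizer on first use.
const result = await getDefaultTokenizer().calculateTokens(request);
console.log(result.totalTokens, result.breakdown.textTokens);

// Release the shared instance (e.g. on shutdown); the next call recreates it.
await disposeDefaultTokenizer();
```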
@@ -1,293 +0,0 @@
/**
 * @license
 * Copyright 2025 Qwen
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { DefaultRequestTokenizer } from './requestTokenizer.js';
import type { CountTokensParameters } from '@google/genai';

describe('DefaultRequestTokenizer', () => {
  let tokenizer: DefaultRequestTokenizer;

  beforeEach(() => {
    tokenizer = new DefaultRequestTokenizer();
  });

  afterEach(async () => {
    await tokenizer.dispose();
  });

  describe('text token calculation', () => {
    it('should calculate tokens for simple text content', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [{ text: 'Hello, world!' }],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThan(0);
      expect(result.breakdown.textTokens).toBeGreaterThan(0);
      expect(result.breakdown.imageTokens).toBe(0);
      expect(result.processingTime).toBeGreaterThan(0);
    });

    it('should handle multiple text parts', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              { text: 'First part' },
              { text: 'Second part' },
              { text: 'Third part' },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThan(0);
      expect(result.breakdown.textTokens).toBeGreaterThan(0);
    });

    it('should handle string content', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: ['Simple string content'],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThan(0);
      expect(result.breakdown.textTokens).toBeGreaterThan(0);
    });
  });

  describe('image token calculation', () => {
    it('should calculate tokens for image content', async () => {
      // Create a simple 1x1 PNG image in base64
      const pngBase64 =
        'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';

      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              {
                inlineData: {
                  mimeType: 'image/png',
                  data: pngBase64,
                },
              },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThanOrEqual(4); // Minimum 4 tokens per image
      expect(result.breakdown.imageTokens).toBeGreaterThanOrEqual(4);
      expect(result.breakdown.textTokens).toBe(0);
    });

    it('should handle multiple images', async () => {
      const pngBase64 =
        'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';

      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              {
                inlineData: {
                  mimeType: 'image/png',
                  data: pngBase64,
                },
              },
              {
                inlineData: {
                  mimeType: 'image/png',
                  data: pngBase64,
                },
              },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThanOrEqual(8); // At least 4 tokens per image
      expect(result.breakdown.imageTokens).toBeGreaterThanOrEqual(8);
    });
  });

  describe('mixed content', () => {
    it('should handle text and image content together', async () => {
      const pngBase64 =
        'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';

      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              { text: 'Here is an image:' },
              {
                inlineData: {
                  mimeType: 'image/png',
                  data: pngBase64,
                },
              },
              { text: 'What do you see?' },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThan(4);
      expect(result.breakdown.textTokens).toBeGreaterThan(0);
      expect(result.breakdown.imageTokens).toBeGreaterThanOrEqual(4);
    });
  });

  describe('function content', () => {
    it('should handle function calls', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              {
                functionCall: {
                  name: 'test_function',
                  args: { param1: 'value1', param2: 42 },
                },
              },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThan(0);
      expect(result.breakdown.otherTokens).toBeGreaterThan(0);
    });
  });

  describe('empty content', () => {
    it('should handle empty request', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBe(0);
      expect(result.breakdown.textTokens).toBe(0);
      expect(result.breakdown.imageTokens).toBe(0);
    });

    it('should handle undefined contents', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBe(0);
    });
  });

  describe('configuration', () => {
    it('should use custom text encoding', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [{ text: 'Test text for encoding' }],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request, {
        textEncoding: 'cl100k_base',
      });

      expect(result.totalTokens).toBeGreaterThan(0);
    });

    it('should process multiple images serially', async () => {
      const pngBase64 =
        'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChAI9jU77yQAAAABJRU5ErkJggg==';

      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: Array(10).fill({
              inlineData: {
                mimeType: 'image/png',
                data: pngBase64,
              },
            }),
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      expect(result.totalTokens).toBeGreaterThanOrEqual(60); // At least 6 tokens per image * 10 images
    });
  });

  describe('error handling', () => {
    it('should handle malformed image data gracefully', async () => {
      const request: CountTokensParameters = {
        model: 'test-model',
        contents: [
          {
            role: 'user',
            parts: [
              {
                inlineData: {
                  mimeType: 'image/png',
                  data: 'invalid-base64-data',
                },
              },
            ],
          },
        ],
      };

      const result = await tokenizer.calculateTokens(request);

      // Should still return some tokens (fallback to minimum)
      expect(result.totalTokens).toBeGreaterThanOrEqual(4);
    });
  });
});
Some files were not shown because too many files have changed in this diff